00:00:00.001 Started by upstream project "autotest-per-patch" build number 132813 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:10.638 The recommended git tool is: git 00:00:10.638 using credential 00000000-0000-0000-0000-000000000002 00:00:10.640 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:10.653 Fetching changes from the remote Git repository 00:00:10.655 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:10.670 Using shallow fetch with depth 1 00:00:10.670 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:10.670 > git --version # timeout=10 00:00:10.683 > git --version # 'git version 2.39.2' 00:00:10.683 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:10.696 Setting http proxy: proxy-dmz.intel.com:911 00:00:10.696 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:17.213 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:17.228 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:17.240 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:17.240 > git config core.sparsecheckout # timeout=10 00:00:17.253 > git read-tree -mu HEAD # timeout=10 00:00:17.271 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:17.299 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:17.299 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:17.423 [Pipeline] Start of Pipeline 00:00:17.432 [Pipeline] library 00:00:17.433 Loading library shm_lib@master 00:00:17.433 Library shm_lib@master is cached. Copying from home. 00:00:17.450 [Pipeline] node 00:00:17.478 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:17.480 [Pipeline] { 00:00:17.490 [Pipeline] catchError 00:00:17.492 [Pipeline] { 00:00:17.504 [Pipeline] wrap 00:00:17.513 [Pipeline] { 00:00:17.521 [Pipeline] stage 00:00:17.522 [Pipeline] { (Prologue) 00:00:17.786 [Pipeline] sh 00:00:18.682 + logger -p user.info -t JENKINS-CI 00:00:18.711 [Pipeline] echo 00:00:18.712 Node: WFP22 00:00:18.719 [Pipeline] sh 00:00:19.053 [Pipeline] setCustomBuildProperty 00:00:19.062 [Pipeline] echo 00:00:19.063 Cleanup processes 00:00:19.067 [Pipeline] sh 00:00:19.355 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:19.355 127407 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:19.368 [Pipeline] sh 00:00:19.661 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:19.661 ++ grep -v 'sudo pgrep' 00:00:19.661 ++ awk '{print $1}' 00:00:19.661 + sudo kill -9 00:00:19.661 + true 00:00:19.675 [Pipeline] cleanWs 00:00:19.684 [WS-CLEANUP] Deleting project workspace... 00:00:19.684 [WS-CLEANUP] Deferred wipeout is used... 
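Note: the "Cleanup processes" step above reduces to a small shell idiom. The sketch below re-expresses the commands traced in the log as a standalone script; the WORKSPACE variable is introduced here only for readability, and the trailing "|| true" stands in for the "+ true" fallback in the trace (kill fails when no stray processes are found).

    #!/usr/bin/env bash
    # Kill any processes still running out of the SPDK workspace before the job starts.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Log the current matches, then collect their PIDs, filtering out the pgrep command itself.
    sudo pgrep -af "$WORKSPACE"
    pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')

    # Force-kill whatever was found; tolerate failure when the list is empty.
    sudo kill -9 $pids || true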
00:00:19.697 [WS-CLEANUP] done 00:00:19.701 [Pipeline] setCustomBuildProperty 00:00:19.715 [Pipeline] sh 00:00:20.006 + sudo git config --global --replace-all safe.directory '*' 00:00:20.105 [Pipeline] httpRequest 00:00:22.052 [Pipeline] echo 00:00:22.053 Sorcerer 10.211.164.112 is alive 00:00:22.062 [Pipeline] retry 00:00:22.063 [Pipeline] { 00:00:22.077 [Pipeline] httpRequest 00:00:22.083 HttpMethod: GET 00:00:22.084 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:22.085 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:22.095 Response Code: HTTP/1.1 200 OK 00:00:22.095 Success: Status code 200 is in the accepted range: 200,404 00:00:22.096 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:47.217 [Pipeline] } 00:00:47.235 [Pipeline] // retry 00:00:47.243 [Pipeline] sh 00:00:47.536 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:47.552 [Pipeline] httpRequest 00:00:47.950 [Pipeline] echo 00:00:47.952 Sorcerer 10.211.164.112 is alive 00:00:47.960 [Pipeline] retry 00:00:47.962 [Pipeline] { 00:00:47.975 [Pipeline] httpRequest 00:00:47.980 HttpMethod: GET 00:00:47.981 URL: http://10.211.164.112/packages/spdk_969b360d978be792569856fb657762eef27f3c68.tar.gz 00:00:47.982 Sending request to url: http://10.211.164.112/packages/spdk_969b360d978be792569856fb657762eef27f3c68.tar.gz 00:00:47.988 Response Code: HTTP/1.1 200 OK 00:00:47.988 Success: Status code 200 is in the accepted range: 200,404 00:00:47.989 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_969b360d978be792569856fb657762eef27f3c68.tar.gz 00:06:58.557 [Pipeline] } 00:06:58.574 [Pipeline] // retry 00:06:58.581 [Pipeline] sh 00:06:58.873 + tar --no-same-owner -xf spdk_969b360d978be792569856fb657762eef27f3c68.tar.gz 00:07:01.429 [Pipeline] sh 00:07:01.721 + git -C spdk log --oneline -n5 00:07:01.721 969b360d9 thread: fd_group-based interrupts 00:07:01.721 851f166ec thread: move interrupt allocation to a function 00:07:01.721 c12cb8fe3 util: add method for setting fd_group's wrapper 00:07:01.721 43c35d804 util: multi-level fd_group nesting 00:07:01.721 6336b7c5c util: keep track of nested child fd_groups 00:07:01.733 [Pipeline] } 00:07:01.745 [Pipeline] // stage 00:07:01.756 [Pipeline] stage 00:07:01.758 [Pipeline] { (Prepare) 00:07:01.774 [Pipeline] writeFile 00:07:01.792 [Pipeline] sh 00:07:02.084 + logger -p user.info -t JENKINS-CI 00:07:02.097 [Pipeline] sh 00:07:02.392 + logger -p user.info -t JENKINS-CI 00:07:02.405 [Pipeline] sh 00:07:02.693 + cat autorun-spdk.conf 00:07:02.693 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:02.693 SPDK_TEST_NVMF=1 00:07:02.693 SPDK_TEST_NVME_CLI=1 00:07:02.693 SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:02.693 SPDK_TEST_NVMF_NICS=e810 00:07:02.693 SPDK_TEST_VFIOUSER=1 00:07:02.693 SPDK_RUN_UBSAN=1 00:07:02.693 NET_TYPE=phy 00:07:02.701 RUN_NIGHTLY=0 00:07:02.706 [Pipeline] readFile 00:07:02.752 [Pipeline] withEnv 00:07:02.754 [Pipeline] { 00:07:02.767 [Pipeline] sh 00:07:03.058 + set -ex 00:07:03.058 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:07:03.058 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:03.058 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:03.058 ++ SPDK_TEST_NVMF=1 00:07:03.058 ++ SPDK_TEST_NVME_CLI=1 00:07:03.058 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:03.058 ++ SPDK_TEST_NVMF_NICS=e810 00:07:03.058 ++ 
SPDK_TEST_VFIOUSER=1 00:07:03.058 ++ SPDK_RUN_UBSAN=1 00:07:03.058 ++ NET_TYPE=phy 00:07:03.058 ++ RUN_NIGHTLY=0 00:07:03.058 + case $SPDK_TEST_NVMF_NICS in 00:07:03.058 + DRIVERS=ice 00:07:03.058 + [[ tcp == \r\d\m\a ]] 00:07:03.058 + [[ -n ice ]] 00:07:03.058 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:07:03.058 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:07:09.645 rmmod: ERROR: Module irdma is not currently loaded 00:07:09.645 rmmod: ERROR: Module i40iw is not currently loaded 00:07:09.645 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:07:09.645 + true 00:07:09.645 + for D in $DRIVERS 00:07:09.645 + sudo modprobe ice 00:07:09.645 + exit 0 00:07:09.656 [Pipeline] } 00:07:09.670 [Pipeline] // withEnv 00:07:09.674 [Pipeline] } 00:07:09.687 [Pipeline] // stage 00:07:09.696 [Pipeline] catchError 00:07:09.698 [Pipeline] { 00:07:09.712 [Pipeline] timeout 00:07:09.713 Timeout set to expire in 1 hr 0 min 00:07:09.714 [Pipeline] { 00:07:09.727 [Pipeline] stage 00:07:09.729 [Pipeline] { (Tests) 00:07:09.743 [Pipeline] sh 00:07:10.041 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:07:10.041 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:07:10.041 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:07:10.041 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:07:10.041 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:10.041 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:07:10.041 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:07:10.041 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:07:10.041 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:07:10.041 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:07:10.041 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:07:10.041 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:07:10.041 + source /etc/os-release 00:07:10.041 ++ NAME='Fedora Linux' 00:07:10.041 ++ VERSION='39 (Cloud Edition)' 00:07:10.041 ++ ID=fedora 00:07:10.041 ++ VERSION_ID=39 00:07:10.041 ++ VERSION_CODENAME= 00:07:10.041 ++ PLATFORM_ID=platform:f39 00:07:10.041 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:07:10.041 ++ ANSI_COLOR='0;38;2;60;110;180' 00:07:10.041 ++ LOGO=fedora-logo-icon 00:07:10.041 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:07:10.041 ++ HOME_URL=https://fedoraproject.org/ 00:07:10.041 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:07:10.041 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:07:10.041 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:07:10.041 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:07:10.041 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:07:10.041 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:07:10.041 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:07:10.041 ++ SUPPORT_END=2024-11-12 00:07:10.041 ++ VARIANT='Cloud Edition' 00:07:10.041 ++ VARIANT_ID=cloud 00:07:10.041 + uname -a 00:07:10.041 Linux spdk-wfp-22 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:07:10.041 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:13.340 Hugepages 00:07:13.340 node hugesize free / total 00:07:13.340 node0 1048576kB 0 / 0 00:07:13.340 node0 2048kB 0 / 0 00:07:13.340 node1 1048576kB 0 / 0 00:07:13.340 node1 2048kB 0 / 0 00:07:13.340 00:07:13.340 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:13.340 I/OAT 0000:00:04.0 8086 
2021 0 ioatdma - - 00:07:13.340 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:07:13.340 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:07:13.340 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:07:13.340 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:07:13.340 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:07:13.340 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:07:13.340 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:07:13.340 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:07:13.340 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:07:13.340 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:07:13.340 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:07:13.340 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:07:13.340 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:07:13.340 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:07:13.340 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:07:13.340 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:07:13.340 + rm -f /tmp/spdk-ld-path 00:07:13.340 + source autorun-spdk.conf 00:07:13.340 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:13.340 ++ SPDK_TEST_NVMF=1 00:07:13.340 ++ SPDK_TEST_NVME_CLI=1 00:07:13.340 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:13.340 ++ SPDK_TEST_NVMF_NICS=e810 00:07:13.340 ++ SPDK_TEST_VFIOUSER=1 00:07:13.340 ++ SPDK_RUN_UBSAN=1 00:07:13.340 ++ NET_TYPE=phy 00:07:13.340 ++ RUN_NIGHTLY=0 00:07:13.340 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:07:13.340 + [[ -n '' ]] 00:07:13.340 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:13.340 + for M in /var/spdk/build-*-manifest.txt 00:07:13.340 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:07:13.340 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:07:13.340 + for M in /var/spdk/build-*-manifest.txt 00:07:13.340 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:07:13.340 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:07:13.340 + for M in /var/spdk/build-*-manifest.txt 00:07:13.340 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:07:13.340 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:07:13.340 ++ uname 00:07:13.340 + [[ Linux == \L\i\n\u\x ]] 00:07:13.340 + sudo dmesg -T 00:07:13.340 + sudo dmesg --clear 00:07:13.340 + dmesg_pid=129829 00:07:13.340 + [[ Fedora Linux == FreeBSD ]] 00:07:13.340 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:13.340 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:13.340 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:07:13.340 + sudo dmesg -Tw 00:07:13.340 + [[ -x /usr/src/fio-static/fio ]] 00:07:13.340 + export FIO_BIN=/usr/src/fio-static/fio 00:07:13.340 + FIO_BIN=/usr/src/fio-static/fio 00:07:13.340 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:07:13.340 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:07:13.340 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:07:13.340 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:13.340 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:13.340 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:07:13.340 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:13.340 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:13.340 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:13.340 23:48:57 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:07:13.340 23:48:57 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:13.340 23:48:57 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:13.340 23:48:57 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:07:13.340 23:48:57 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:07:13.340 23:48:57 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:13.340 23:48:57 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:07:13.340 23:48:57 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:07:13.340 23:48:57 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:07:13.340 23:48:57 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:07:13.340 23:48:57 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:07:13.340 23:48:57 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:07:13.340 23:48:57 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:13.340 23:48:57 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:07:13.340 23:48:57 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.340 23:48:57 -- scripts/common.sh@15 -- $ shopt -s extglob 00:07:13.340 23:48:57 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:07:13.340 23:48:57 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.340 23:48:57 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.340 23:48:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.340 23:48:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.340 23:48:57 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.340 23:48:57 -- paths/export.sh@5 -- $ export PATH 00:07:13.340 23:48:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.341 23:48:57 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:13.341 23:48:57 -- common/autobuild_common.sh@493 -- $ date +%s 00:07:13.341 23:48:57 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733784537.XXXXXX 00:07:13.341 23:48:57 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733784537.qVna0u 00:07:13.341 23:48:57 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:07:13.341 23:48:57 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:07:13.341 23:48:57 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:07:13.341 23:48:57 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:07:13.341 23:48:57 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:07:13.341 23:48:57 -- common/autobuild_common.sh@509 -- $ get_config_params 00:07:13.341 23:48:57 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:07:13.341 23:48:57 -- common/autotest_common.sh@10 -- $ set +x 00:07:13.341 23:48:57 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:07:13.341 23:48:57 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:07:13.341 23:48:57 -- pm/common@17 -- $ local monitor 00:07:13.341 23:48:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:13.341 23:48:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:13.341 23:48:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:13.341 23:48:57 -- pm/common@21 -- $ date +%s 00:07:13.341 23:48:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:13.341 23:48:57 -- pm/common@21 -- $ date +%s 00:07:13.341 23:48:57 -- pm/common@25 -- $ sleep 1 00:07:13.341 23:48:57 -- pm/common@21 -- $ date +%s 00:07:13.341 23:48:57 -- pm/common@21 -- $ date +%s 00:07:13.341 23:48:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733784537 00:07:13.341 23:48:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733784537 00:07:13.341 23:48:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733784537 00:07:13.341 23:48:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733784537 00:07:13.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733784537_collect-cpu-load.pm.log 00:07:13.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733784537_collect-vmstat.pm.log 00:07:13.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733784537_collect-cpu-temp.pm.log 00:07:13.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733784537_collect-bmc-pm.bmc.pm.log 00:07:14.543 23:48:58 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:07:14.543 23:48:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:07:14.543 23:48:58 -- spdk/autobuild.sh@12 -- $ umask 022 00:07:14.543 23:48:58 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:14.543 23:48:58 -- spdk/autobuild.sh@16 -- $ date -u 00:07:14.543 Mon Dec 9 10:48:58 PM UTC 2024 00:07:14.543 23:48:58 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:07:14.543 v25.01-pre-318-g969b360d9 00:07:14.543 23:48:58 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:07:14.543 23:48:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:07:14.543 23:48:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:07:14.543 23:48:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:14.543 23:48:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:14.543 23:48:58 -- common/autotest_common.sh@10 -- $ set +x 00:07:14.543 ************************************ 00:07:14.543 START TEST ubsan 00:07:14.543 ************************************ 00:07:14.543 23:48:58 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:07:14.543 using ubsan 00:07:14.543 00:07:14.543 real 0m0.001s 00:07:14.543 user 0m0.000s 00:07:14.543 sys 0m0.001s 00:07:14.543 23:48:58 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:14.543 23:48:58 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:07:14.543 ************************************ 00:07:14.543 END TEST ubsan 00:07:14.543 ************************************ 00:07:14.543 23:48:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:07:14.543 23:48:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:07:14.543 23:48:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:07:14.543 23:48:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:07:14.543 23:48:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:07:14.543 23:48:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:07:14.543 23:48:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:07:14.543 23:48:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:07:14.543 
23:48:58 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:07:15.113 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:15.113 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:16.052 Using 'verbs' RDMA provider 00:07:31.895 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:07:46.793 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:07:46.793 Creating mk/config.mk...done. 00:07:46.793 Creating mk/cc.flags.mk...done. 00:07:46.793 Type 'make' to build. 00:07:46.793 23:49:30 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:07:46.793 23:49:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:46.793 23:49:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:46.793 23:49:30 -- common/autotest_common.sh@10 -- $ set +x 00:07:46.793 ************************************ 00:07:46.793 START TEST make 00:07:46.793 ************************************ 00:07:46.793 23:49:30 make -- common/autotest_common.sh@1129 -- $ make -j112 00:07:46.793 make[1]: Nothing to be done for 'all'. 00:07:48.713 The Meson build system 00:07:48.713 Version: 1.5.0 00:07:48.713 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:07:48.713 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:07:48.713 Build type: native build 00:07:48.713 Project name: libvfio-user 00:07:48.713 Project version: 0.0.1 00:07:48.713 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:48.713 C linker for the host machine: cc ld.bfd 2.40-14 00:07:48.713 Host machine cpu family: x86_64 00:07:48.713 Host machine cpu: x86_64 00:07:48.713 Run-time dependency threads found: YES 00:07:48.713 Library dl found: YES 00:07:48.713 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:48.713 Run-time dependency json-c found: YES 0.17 00:07:48.713 Run-time dependency cmocka found: YES 1.1.7 00:07:48.713 Program pytest-3 found: NO 00:07:48.713 Program flake8 found: NO 00:07:48.713 Program misspell-fixer found: NO 00:07:48.713 Program restructuredtext-lint found: NO 00:07:48.713 Program valgrind found: YES (/usr/bin/valgrind) 00:07:48.713 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:48.713 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:48.713 Compiler for C supports arguments -Wwrite-strings: YES 00:07:48.713 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:07:48.713 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:07:48.713 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:07:48.713 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
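Note: earlier in this stage the SPDK tree is configured and built; the sketch below collapses the ./configure invocation and the "make -j112" run traced above into a standalone script. The flags are copied from the trace; the run_test wrapper that brackets and times the make step belongs to SPDK's test harness and is not reproduced here.

    #!/usr/bin/env bash
    # Configure and build the SPDK checkout with the options used by this job.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    cd "$SPDK_DIR"
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared

    # Parallel build across the builder's 112 hardware threads.
    make -j112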
00:07:48.713 Build targets in project: 8 00:07:48.713 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:07:48.713 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:07:48.713 00:07:48.713 libvfio-user 0.0.1 00:07:48.713 00:07:48.713 User defined options 00:07:48.713 buildtype : debug 00:07:48.713 default_library: shared 00:07:48.713 libdir : /usr/local/lib 00:07:48.713 00:07:48.713 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:48.972 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:07:48.972 [1/37] Compiling C object samples/null.p/null.c.o 00:07:49.231 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:07:49.231 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:07:49.231 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:07:49.231 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:07:49.232 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:07:49.232 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:07:49.232 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:07:49.232 [9/37] Compiling C object test/unit_tests.p/mocks.c.o 00:07:49.232 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:07:49.232 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:07:49.232 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:07:49.232 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:07:49.232 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:07:49.232 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:07:49.232 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:07:49.232 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:07:49.232 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:07:49.232 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:07:49.232 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:07:49.232 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:07:49.232 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:07:49.232 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:07:49.232 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:07:49.232 [25/37] Compiling C object samples/server.p/server.c.o 00:07:49.232 [26/37] Compiling C object samples/client.p/client.c.o 00:07:49.232 [27/37] Linking target samples/client 00:07:49.232 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:07:49.232 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:07:49.232 [30/37] Linking target test/unit_tests 00:07:49.232 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:07:49.491 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:07:49.491 [33/37] Linking target samples/null 00:07:49.491 [34/37] Linking target samples/server 00:07:49.491 [35/37] Linking target samples/lspci 00:07:49.491 [36/37] Linking target samples/gpio-pci-idio-16 00:07:49.491 [37/37] Linking target samples/shadow_ioeventfd_server 00:07:49.491 INFO: autodetecting backend as ninja 00:07:49.491 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
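Note: the libvfio-user submodule build whose meson/ninja output appears above can be approximated by the commands below. The meson invocation itself is not printed in the log, so this is a reconstruction from the "User defined options" summary and the DESTDIR install command that follows; paths match this job's workspace.

    #!/usr/bin/env bash
    # Out-of-tree debug build of the bundled libvfio-user, staged under build/ via DESTDIR.
    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    STAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user

    meson setup "$BUILD" "$SRC" --buildtype debug --default-library shared --libdir /usr/local/lib
    ninja -C "$BUILD"
    DESTDIR="$STAGE" meson install --quiet -C "$BUILD"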
00:07:49.491 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:07:50.060 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:07:50.060 ninja: no work to do. 00:07:55.360 The Meson build system 00:07:55.360 Version: 1.5.0 00:07:55.360 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:07:55.360 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:07:55.360 Build type: native build 00:07:55.360 Program cat found: YES (/usr/bin/cat) 00:07:55.360 Project name: DPDK 00:07:55.360 Project version: 24.03.0 00:07:55.360 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:55.360 C linker for the host machine: cc ld.bfd 2.40-14 00:07:55.360 Host machine cpu family: x86_64 00:07:55.360 Host machine cpu: x86_64 00:07:55.360 Message: ## Building in Developer Mode ## 00:07:55.360 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:55.360 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:07:55.360 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:55.360 Program python3 found: YES (/usr/bin/python3) 00:07:55.360 Program cat found: YES (/usr/bin/cat) 00:07:55.360 Compiler for C supports arguments -march=native: YES 00:07:55.360 Checking for size of "void *" : 8 00:07:55.360 Checking for size of "void *" : 8 (cached) 00:07:55.360 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:55.360 Library m found: YES 00:07:55.360 Library numa found: YES 00:07:55.360 Has header "numaif.h" : YES 00:07:55.360 Library fdt found: NO 00:07:55.360 Library execinfo found: NO 00:07:55.360 Has header "execinfo.h" : YES 00:07:55.360 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:55.360 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:55.360 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:55.360 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:55.360 Run-time dependency openssl found: YES 3.1.1 00:07:55.360 Run-time dependency libpcap found: YES 1.10.4 00:07:55.360 Has header "pcap.h" with dependency libpcap: YES 00:07:55.360 Compiler for C supports arguments -Wcast-qual: YES 00:07:55.360 Compiler for C supports arguments -Wdeprecated: YES 00:07:55.360 Compiler for C supports arguments -Wformat: YES 00:07:55.360 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:55.360 Compiler for C supports arguments -Wformat-security: NO 00:07:55.360 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:55.360 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:55.360 Compiler for C supports arguments -Wnested-externs: YES 00:07:55.360 Compiler for C supports arguments -Wold-style-definition: YES 00:07:55.360 Compiler for C supports arguments -Wpointer-arith: YES 00:07:55.360 Compiler for C supports arguments -Wsign-compare: YES 00:07:55.360 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:55.360 Compiler for C supports arguments -Wundef: YES 00:07:55.360 Compiler for C supports arguments -Wwrite-strings: YES 00:07:55.360 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:55.360 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:07:55.360 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:55.360 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:55.360 Program objdump found: YES (/usr/bin/objdump) 00:07:55.360 Compiler for C supports arguments -mavx512f: YES 00:07:55.360 Checking if "AVX512 checking" compiles: YES 00:07:55.360 Fetching value of define "__SSE4_2__" : 1 00:07:55.360 Fetching value of define "__AES__" : 1 00:07:55.360 Fetching value of define "__AVX__" : 1 00:07:55.360 Fetching value of define "__AVX2__" : 1 00:07:55.360 Fetching value of define "__AVX512BW__" : 1 00:07:55.360 Fetching value of define "__AVX512CD__" : 1 00:07:55.360 Fetching value of define "__AVX512DQ__" : 1 00:07:55.360 Fetching value of define "__AVX512F__" : 1 00:07:55.360 Fetching value of define "__AVX512VL__" : 1 00:07:55.360 Fetching value of define "__PCLMUL__" : 1 00:07:55.360 Fetching value of define "__RDRND__" : 1 00:07:55.360 Fetching value of define "__RDSEED__" : 1 00:07:55.360 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:55.360 Fetching value of define "__znver1__" : (undefined) 00:07:55.360 Fetching value of define "__znver2__" : (undefined) 00:07:55.360 Fetching value of define "__znver3__" : (undefined) 00:07:55.360 Fetching value of define "__znver4__" : (undefined) 00:07:55.360 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:55.360 Message: lib/log: Defining dependency "log" 00:07:55.360 Message: lib/kvargs: Defining dependency "kvargs" 00:07:55.360 Message: lib/telemetry: Defining dependency "telemetry" 00:07:55.360 Checking for function "getentropy" : NO 00:07:55.360 Message: lib/eal: Defining dependency "eal" 00:07:55.360 Message: lib/ring: Defining dependency "ring" 00:07:55.360 Message: lib/rcu: Defining dependency "rcu" 00:07:55.360 Message: lib/mempool: Defining dependency "mempool" 00:07:55.360 Message: lib/mbuf: Defining dependency "mbuf" 00:07:55.360 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:55.360 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:55.360 Fetching value of define "__AVX512BW__" : 1 (cached) 00:07:55.360 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:07:55.360 Fetching value of define "__AVX512VL__" : 1 (cached) 00:07:55.360 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:07:55.360 Compiler for C supports arguments -mpclmul: YES 00:07:55.360 Compiler for C supports arguments -maes: YES 00:07:55.360 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:55.360 Compiler for C supports arguments -mavx512bw: YES 00:07:55.360 Compiler for C supports arguments -mavx512dq: YES 00:07:55.360 Compiler for C supports arguments -mavx512vl: YES 00:07:55.360 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:55.360 Compiler for C supports arguments -mavx2: YES 00:07:55.360 Compiler for C supports arguments -mavx: YES 00:07:55.360 Message: lib/net: Defining dependency "net" 00:07:55.360 Message: lib/meter: Defining dependency "meter" 00:07:55.360 Message: lib/ethdev: Defining dependency "ethdev" 00:07:55.360 Message: lib/pci: Defining dependency "pci" 00:07:55.360 Message: lib/cmdline: Defining dependency "cmdline" 00:07:55.360 Message: lib/hash: Defining dependency "hash" 00:07:55.360 Message: lib/timer: Defining dependency "timer" 00:07:55.360 Message: lib/compressdev: Defining dependency "compressdev" 00:07:55.360 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:55.360 Message: lib/dmadev: Defining dependency 
"dmadev" 00:07:55.360 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:55.360 Message: lib/power: Defining dependency "power" 00:07:55.360 Message: lib/reorder: Defining dependency "reorder" 00:07:55.360 Message: lib/security: Defining dependency "security" 00:07:55.360 Has header "linux/userfaultfd.h" : YES 00:07:55.360 Has header "linux/vduse.h" : YES 00:07:55.360 Message: lib/vhost: Defining dependency "vhost" 00:07:55.360 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:55.360 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:55.360 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:55.360 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:55.360 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:55.360 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:55.360 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:55.360 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:55.360 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:55.360 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:55.360 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:55.360 Configuring doxy-api-html.conf using configuration 00:07:55.360 Configuring doxy-api-man.conf using configuration 00:07:55.360 Program mandb found: YES (/usr/bin/mandb) 00:07:55.360 Program sphinx-build found: NO 00:07:55.360 Configuring rte_build_config.h using configuration 00:07:55.360 Message: 00:07:55.360 ================= 00:07:55.360 Applications Enabled 00:07:55.360 ================= 00:07:55.360 00:07:55.360 apps: 00:07:55.360 00:07:55.360 00:07:55.360 Message: 00:07:55.360 ================= 00:07:55.360 Libraries Enabled 00:07:55.360 ================= 00:07:55.360 00:07:55.360 libs: 00:07:55.360 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:55.360 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:55.360 cryptodev, dmadev, power, reorder, security, vhost, 00:07:55.360 00:07:55.360 Message: 00:07:55.360 =============== 00:07:55.360 Drivers Enabled 00:07:55.360 =============== 00:07:55.360 00:07:55.360 common: 00:07:55.360 00:07:55.360 bus: 00:07:55.360 pci, vdev, 00:07:55.360 mempool: 00:07:55.360 ring, 00:07:55.360 dma: 00:07:55.360 00:07:55.360 net: 00:07:55.360 00:07:55.360 crypto: 00:07:55.361 00:07:55.361 compress: 00:07:55.361 00:07:55.361 vdpa: 00:07:55.361 00:07:55.361 00:07:55.361 Message: 00:07:55.361 ================= 00:07:55.361 Content Skipped 00:07:55.361 ================= 00:07:55.361 00:07:55.361 apps: 00:07:55.361 dumpcap: explicitly disabled via build config 00:07:55.361 graph: explicitly disabled via build config 00:07:55.361 pdump: explicitly disabled via build config 00:07:55.361 proc-info: explicitly disabled via build config 00:07:55.361 test-acl: explicitly disabled via build config 00:07:55.361 test-bbdev: explicitly disabled via build config 00:07:55.361 test-cmdline: explicitly disabled via build config 00:07:55.361 test-compress-perf: explicitly disabled via build config 00:07:55.361 test-crypto-perf: explicitly disabled via build config 00:07:55.361 test-dma-perf: explicitly disabled via build config 00:07:55.361 test-eventdev: explicitly disabled via build config 00:07:55.361 test-fib: explicitly disabled via build config 00:07:55.361 test-flow-perf: explicitly disabled via build config 00:07:55.361 test-gpudev: explicitly 
disabled via build config 00:07:55.361 test-mldev: explicitly disabled via build config 00:07:55.361 test-pipeline: explicitly disabled via build config 00:07:55.361 test-pmd: explicitly disabled via build config 00:07:55.361 test-regex: explicitly disabled via build config 00:07:55.361 test-sad: explicitly disabled via build config 00:07:55.361 test-security-perf: explicitly disabled via build config 00:07:55.361 00:07:55.361 libs: 00:07:55.361 argparse: explicitly disabled via build config 00:07:55.361 metrics: explicitly disabled via build config 00:07:55.361 acl: explicitly disabled via build config 00:07:55.361 bbdev: explicitly disabled via build config 00:07:55.361 bitratestats: explicitly disabled via build config 00:07:55.361 bpf: explicitly disabled via build config 00:07:55.361 cfgfile: explicitly disabled via build config 00:07:55.361 distributor: explicitly disabled via build config 00:07:55.361 efd: explicitly disabled via build config 00:07:55.361 eventdev: explicitly disabled via build config 00:07:55.361 dispatcher: explicitly disabled via build config 00:07:55.361 gpudev: explicitly disabled via build config 00:07:55.361 gro: explicitly disabled via build config 00:07:55.361 gso: explicitly disabled via build config 00:07:55.361 ip_frag: explicitly disabled via build config 00:07:55.361 jobstats: explicitly disabled via build config 00:07:55.361 latencystats: explicitly disabled via build config 00:07:55.361 lpm: explicitly disabled via build config 00:07:55.361 member: explicitly disabled via build config 00:07:55.361 pcapng: explicitly disabled via build config 00:07:55.361 rawdev: explicitly disabled via build config 00:07:55.361 regexdev: explicitly disabled via build config 00:07:55.361 mldev: explicitly disabled via build config 00:07:55.361 rib: explicitly disabled via build config 00:07:55.361 sched: explicitly disabled via build config 00:07:55.361 stack: explicitly disabled via build config 00:07:55.361 ipsec: explicitly disabled via build config 00:07:55.361 pdcp: explicitly disabled via build config 00:07:55.361 fib: explicitly disabled via build config 00:07:55.361 port: explicitly disabled via build config 00:07:55.361 pdump: explicitly disabled via build config 00:07:55.361 table: explicitly disabled via build config 00:07:55.361 pipeline: explicitly disabled via build config 00:07:55.361 graph: explicitly disabled via build config 00:07:55.361 node: explicitly disabled via build config 00:07:55.361 00:07:55.361 drivers: 00:07:55.361 common/cpt: not in enabled drivers build config 00:07:55.361 common/dpaax: not in enabled drivers build config 00:07:55.361 common/iavf: not in enabled drivers build config 00:07:55.361 common/idpf: not in enabled drivers build config 00:07:55.361 common/ionic: not in enabled drivers build config 00:07:55.361 common/mvep: not in enabled drivers build config 00:07:55.361 common/octeontx: not in enabled drivers build config 00:07:55.361 bus/auxiliary: not in enabled drivers build config 00:07:55.361 bus/cdx: not in enabled drivers build config 00:07:55.361 bus/dpaa: not in enabled drivers build config 00:07:55.361 bus/fslmc: not in enabled drivers build config 00:07:55.361 bus/ifpga: not in enabled drivers build config 00:07:55.361 bus/platform: not in enabled drivers build config 00:07:55.361 bus/uacce: not in enabled drivers build config 00:07:55.361 bus/vmbus: not in enabled drivers build config 00:07:55.361 common/cnxk: not in enabled drivers build config 00:07:55.361 common/mlx5: not in enabled drivers build config 
00:07:55.361 common/nfp: not in enabled drivers build config 00:07:55.361 common/nitrox: not in enabled drivers build config 00:07:55.361 common/qat: not in enabled drivers build config 00:07:55.361 common/sfc_efx: not in enabled drivers build config 00:07:55.361 mempool/bucket: not in enabled drivers build config 00:07:55.361 mempool/cnxk: not in enabled drivers build config 00:07:55.361 mempool/dpaa: not in enabled drivers build config 00:07:55.361 mempool/dpaa2: not in enabled drivers build config 00:07:55.361 mempool/octeontx: not in enabled drivers build config 00:07:55.361 mempool/stack: not in enabled drivers build config 00:07:55.361 dma/cnxk: not in enabled drivers build config 00:07:55.361 dma/dpaa: not in enabled drivers build config 00:07:55.361 dma/dpaa2: not in enabled drivers build config 00:07:55.361 dma/hisilicon: not in enabled drivers build config 00:07:55.361 dma/idxd: not in enabled drivers build config 00:07:55.361 dma/ioat: not in enabled drivers build config 00:07:55.361 dma/skeleton: not in enabled drivers build config 00:07:55.361 net/af_packet: not in enabled drivers build config 00:07:55.361 net/af_xdp: not in enabled drivers build config 00:07:55.361 net/ark: not in enabled drivers build config 00:07:55.361 net/atlantic: not in enabled drivers build config 00:07:55.361 net/avp: not in enabled drivers build config 00:07:55.361 net/axgbe: not in enabled drivers build config 00:07:55.361 net/bnx2x: not in enabled drivers build config 00:07:55.361 net/bnxt: not in enabled drivers build config 00:07:55.361 net/bonding: not in enabled drivers build config 00:07:55.361 net/cnxk: not in enabled drivers build config 00:07:55.361 net/cpfl: not in enabled drivers build config 00:07:55.361 net/cxgbe: not in enabled drivers build config 00:07:55.361 net/dpaa: not in enabled drivers build config 00:07:55.361 net/dpaa2: not in enabled drivers build config 00:07:55.361 net/e1000: not in enabled drivers build config 00:07:55.361 net/ena: not in enabled drivers build config 00:07:55.361 net/enetc: not in enabled drivers build config 00:07:55.361 net/enetfec: not in enabled drivers build config 00:07:55.361 net/enic: not in enabled drivers build config 00:07:55.361 net/failsafe: not in enabled drivers build config 00:07:55.361 net/fm10k: not in enabled drivers build config 00:07:55.361 net/gve: not in enabled drivers build config 00:07:55.361 net/hinic: not in enabled drivers build config 00:07:55.361 net/hns3: not in enabled drivers build config 00:07:55.361 net/i40e: not in enabled drivers build config 00:07:55.361 net/iavf: not in enabled drivers build config 00:07:55.361 net/ice: not in enabled drivers build config 00:07:55.361 net/idpf: not in enabled drivers build config 00:07:55.361 net/igc: not in enabled drivers build config 00:07:55.361 net/ionic: not in enabled drivers build config 00:07:55.361 net/ipn3ke: not in enabled drivers build config 00:07:55.361 net/ixgbe: not in enabled drivers build config 00:07:55.361 net/mana: not in enabled drivers build config 00:07:55.361 net/memif: not in enabled drivers build config 00:07:55.361 net/mlx4: not in enabled drivers build config 00:07:55.361 net/mlx5: not in enabled drivers build config 00:07:55.361 net/mvneta: not in enabled drivers build config 00:07:55.361 net/mvpp2: not in enabled drivers build config 00:07:55.361 net/netvsc: not in enabled drivers build config 00:07:55.361 net/nfb: not in enabled drivers build config 00:07:55.361 net/nfp: not in enabled drivers build config 00:07:55.361 net/ngbe: not in enabled 
drivers build config 00:07:55.361 net/null: not in enabled drivers build config 00:07:55.361 net/octeontx: not in enabled drivers build config 00:07:55.361 net/octeon_ep: not in enabled drivers build config 00:07:55.361 net/pcap: not in enabled drivers build config 00:07:55.361 net/pfe: not in enabled drivers build config 00:07:55.361 net/qede: not in enabled drivers build config 00:07:55.361 net/ring: not in enabled drivers build config 00:07:55.361 net/sfc: not in enabled drivers build config 00:07:55.361 net/softnic: not in enabled drivers build config 00:07:55.361 net/tap: not in enabled drivers build config 00:07:55.361 net/thunderx: not in enabled drivers build config 00:07:55.361 net/txgbe: not in enabled drivers build config 00:07:55.361 net/vdev_netvsc: not in enabled drivers build config 00:07:55.361 net/vhost: not in enabled drivers build config 00:07:55.361 net/virtio: not in enabled drivers build config 00:07:55.361 net/vmxnet3: not in enabled drivers build config 00:07:55.361 raw/*: missing internal dependency, "rawdev" 00:07:55.361 crypto/armv8: not in enabled drivers build config 00:07:55.361 crypto/bcmfs: not in enabled drivers build config 00:07:55.361 crypto/caam_jr: not in enabled drivers build config 00:07:55.361 crypto/ccp: not in enabled drivers build config 00:07:55.361 crypto/cnxk: not in enabled drivers build config 00:07:55.361 crypto/dpaa_sec: not in enabled drivers build config 00:07:55.361 crypto/dpaa2_sec: not in enabled drivers build config 00:07:55.361 crypto/ipsec_mb: not in enabled drivers build config 00:07:55.361 crypto/mlx5: not in enabled drivers build config 00:07:55.361 crypto/mvsam: not in enabled drivers build config 00:07:55.361 crypto/nitrox: not in enabled drivers build config 00:07:55.361 crypto/null: not in enabled drivers build config 00:07:55.362 crypto/octeontx: not in enabled drivers build config 00:07:55.362 crypto/openssl: not in enabled drivers build config 00:07:55.362 crypto/scheduler: not in enabled drivers build config 00:07:55.362 crypto/uadk: not in enabled drivers build config 00:07:55.362 crypto/virtio: not in enabled drivers build config 00:07:55.362 compress/isal: not in enabled drivers build config 00:07:55.362 compress/mlx5: not in enabled drivers build config 00:07:55.362 compress/nitrox: not in enabled drivers build config 00:07:55.362 compress/octeontx: not in enabled drivers build config 00:07:55.362 compress/zlib: not in enabled drivers build config 00:07:55.362 regex/*: missing internal dependency, "regexdev" 00:07:55.362 ml/*: missing internal dependency, "mldev" 00:07:55.362 vdpa/ifc: not in enabled drivers build config 00:07:55.362 vdpa/mlx5: not in enabled drivers build config 00:07:55.362 vdpa/nfp: not in enabled drivers build config 00:07:55.362 vdpa/sfc: not in enabled drivers build config 00:07:55.362 event/*: missing internal dependency, "eventdev" 00:07:55.362 baseband/*: missing internal dependency, "bbdev" 00:07:55.362 gpu/*: missing internal dependency, "gpudev" 00:07:55.362 00:07:55.362 00:07:55.362 Build targets in project: 85 00:07:55.362 00:07:55.362 DPDK 24.03.0 00:07:55.362 00:07:55.362 User defined options 00:07:55.362 buildtype : debug 00:07:55.362 default_library : shared 00:07:55.362 libdir : lib 00:07:55.362 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:55.362 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:55.362 c_link_args : 00:07:55.362 cpu_instruction_set: native 00:07:55.362 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:07:55.362 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:07:55.362 enable_docs : false 00:07:55.362 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:55.362 enable_kmods : false 00:07:55.362 max_lcores : 128 00:07:55.362 tests : false 00:07:55.362 00:07:55.362 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:55.362 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:07:55.362 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:55.362 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:55.362 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:55.631 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:55.631 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:55.631 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:55.631 [7/268] Linking static target lib/librte_kvargs.a 00:07:55.631 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:55.631 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:55.631 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:55.631 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:55.631 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:55.631 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:55.631 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:55.631 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:55.631 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:55.631 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:55.631 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:55.631 [19/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:55.631 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:55.631 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:55.631 [22/268] Linking static target lib/librte_log.a 00:07:55.631 [23/268] Linking static target lib/librte_pci.a 00:07:55.631 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:55.631 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:55.631 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:55.631 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:55.631 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:55.631 [29/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:55.631 [30/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:55.631 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:55.895 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:55.896 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:55.896 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:55.896 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:55.896 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:55.896 [37/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:55.896 [38/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:55.896 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:55.896 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:55.896 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:55.896 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:55.896 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:55.896 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:55.896 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:55.896 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:55.896 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:56.156 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:56.156 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:56.156 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:56.156 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:56.156 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:56.156 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:56.156 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:56.156 [55/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:56.156 [56/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.156 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:56.156 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:56.156 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:56.156 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:56.156 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:56.156 [62/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:56.156 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:56.156 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:56.156 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:56.156 [66/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:56.156 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:56.156 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:56.156 [69/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 
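Note: the DPDK configuration summarized under "User defined options" above maps onto a meson setup command along the following lines. The exact invocation is driven by SPDK's dpdk build scripts and is not shown in this log, so treat this as an illustrative reconstruction; only a subset of the options is spelled out here, and the full disable_apps/disable_libs values are the comma-separated lists printed above.

    #!/usr/bin/env bash
    # Illustrative DPDK meson configuration matching the option summary in the log.
    DPDK_SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
    DPDK_BUILD=$DPDK_SRC/build-tmp

    meson setup "$DPDK_BUILD" "$DPDK_SRC" \
        --buildtype debug --default-library shared --libdir lib \
        --prefix /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
        -Dmax_lcores=128 -Dtests=false -Denable_docs=false
    ninja -C "$DPDK_BUILD"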
00:07:56.156 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:56.156 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:56.156 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:56.156 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:56.156 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:56.156 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:56.156 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:56.156 [77/268] Linking static target lib/librte_meter.a 00:07:56.156 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:56.156 [79/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:56.156 [80/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:56.156 [81/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:56.156 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:56.156 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:56.157 [84/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:56.157 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:56.157 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:56.157 [87/268] Linking static target lib/librte_telemetry.a 00:07:56.157 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:56.157 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:56.157 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:56.157 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:56.157 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:56.157 [93/268] Linking static target lib/librte_ring.a 00:07:56.157 [94/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:56.157 [95/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:56.157 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:56.157 [97/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:56.157 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:56.157 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:56.157 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:56.157 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:56.157 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:56.157 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:56.157 [104/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:56.157 [105/268] Linking static target lib/librte_cmdline.a 00:07:56.157 [106/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.157 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:56.157 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:56.157 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:56.157 [110/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:56.157 [111/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:56.157 [112/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:56.157 [113/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:56.157 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:56.157 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:56.157 [116/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:56.157 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:56.157 [118/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:56.157 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:56.157 [120/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:56.157 [121/268] Linking static target lib/librte_net.a 00:07:56.157 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:56.157 [123/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:56.157 [124/268] Linking static target lib/librte_mempool.a 00:07:56.157 [125/268] Linking static target lib/librte_timer.a 00:07:56.157 [126/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:56.157 [127/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:56.157 [128/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:56.157 [129/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:56.157 [130/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:56.157 [131/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:56.157 [132/268] Linking static target lib/librte_rcu.a 00:07:56.157 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:56.157 [134/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:56.157 [135/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:56.157 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:56.157 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:56.157 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:56.157 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:56.157 [140/268] Linking static target lib/librte_eal.a 00:07:56.157 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:56.157 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:56.157 [143/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:56.418 [144/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:56.418 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:56.418 [146/268] Linking static target lib/librte_compressdev.a 00:07:56.418 [147/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:56.418 [148/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:56.418 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:56.418 [150/268] Linking static target lib/librte_dmadev.a 00:07:56.418 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:56.418 [152/268] 
Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:56.418 [153/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.418 [154/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.418 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:56.418 [156/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:56.418 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:56.418 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:56.418 [159/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:56.418 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:56.418 [161/268] Linking static target lib/librte_hash.a 00:07:56.418 [162/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:56.418 [163/268] Linking target lib/librte_log.so.24.1 00:07:56.418 [164/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.418 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:56.418 [166/268] Linking static target lib/librte_mbuf.a 00:07:56.418 [167/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:56.418 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:56.418 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:56.418 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:56.418 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:56.418 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:56.418 [173/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:56.418 [174/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.418 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:56.418 [176/268] Linking static target lib/librte_reorder.a 00:07:56.418 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:56.418 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:56.418 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:56.678 [180/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:56.678 [181/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:56.678 [182/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:56.678 [183/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:56.678 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:56.678 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:56.678 [186/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:56.678 [187/268] Linking static target lib/librte_power.a 00:07:56.678 [188/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.678 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:56.678 [190/268] Linking target lib/librte_kvargs.so.24.1 00:07:56.678 [191/268] Linking static target lib/librte_security.a 00:07:56.678 [192/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:56.678 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:56.678 [194/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.678 [195/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:56.678 [196/268] Linking static target lib/librte_cryptodev.a 00:07:56.678 [197/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.678 [198/268] Linking target lib/librte_telemetry.so.24.1 00:07:56.678 [199/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:56.678 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:56.678 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:56.678 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:56.678 [203/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:56.678 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:56.678 [205/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:56.678 [206/268] Linking static target drivers/librte_bus_vdev.a 00:07:56.678 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:56.678 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:56.678 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:56.678 [210/268] Linking static target drivers/librte_mempool_ring.a 00:07:56.938 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:56.938 [212/268] Linking static target drivers/librte_bus_pci.a 00:07:56.938 [213/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:56.938 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.938 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:56.938 [216/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.938 [217/268] Linking static target lib/librte_ethdev.a 00:07:57.197 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:57.197 [219/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:57.197 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:57.197 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:57.457 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:57.457 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:57.457 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:57.457 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:57.716 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:57.716 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:58.286 [228/268] 
Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:58.286 [229/268] Linking static target lib/librte_vhost.a 00:07:58.857 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:00.769 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:07.346 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:09.256 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:09.256 [234/268] Linking target lib/librte_eal.so.24.1 00:08:09.256 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:08:09.256 [236/268] Linking target lib/librte_ring.so.24.1 00:08:09.257 [237/268] Linking target lib/librte_meter.so.24.1 00:08:09.257 [238/268] Linking target lib/librte_timer.so.24.1 00:08:09.257 [239/268] Linking target lib/librte_pci.so.24.1 00:08:09.257 [240/268] Linking target lib/librte_dmadev.so.24.1 00:08:09.257 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:08:09.517 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:08:09.517 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:08:09.517 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:08:09.517 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:08:09.517 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:08:09.517 [247/268] Linking target lib/librte_rcu.so.24.1 00:08:09.517 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:08:09.517 [249/268] Linking target lib/librte_mempool.so.24.1 00:08:09.777 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:08:09.777 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:08:09.777 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:08:09.777 [253/268] Linking target lib/librte_mbuf.so.24.1 00:08:09.777 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:08:10.036 [255/268] Linking target lib/librte_reorder.so.24.1 00:08:10.036 [256/268] Linking target lib/librte_compressdev.so.24.1 00:08:10.036 [257/268] Linking target lib/librte_net.so.24.1 00:08:10.036 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:08:10.036 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:08:10.036 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:08:10.036 [261/268] Linking target lib/librte_security.so.24.1 00:08:10.036 [262/268] Linking target lib/librte_hash.so.24.1 00:08:10.036 [263/268] Linking target lib/librte_cmdline.so.24.1 00:08:10.036 [264/268] Linking target lib/librte_ethdev.so.24.1 00:08:10.295 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:08:10.295 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:08:10.295 [267/268] Linking target lib/librte_power.so.24.1 00:08:10.295 [268/268] Linking target lib/librte_vhost.so.24.1 00:08:10.295 INFO: autodetecting backend as ninja 00:08:10.295 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:08:16.876 CC lib/log/log.o 
00:08:16.876 CC lib/log/log_flags.o 00:08:16.876 CC lib/log/log_deprecated.o 00:08:16.876 CC lib/ut/ut.o 00:08:16.876 CC lib/ut_mock/mock.o 00:08:16.876 LIB libspdk_log.a 00:08:16.876 LIB libspdk_ut.a 00:08:16.876 LIB libspdk_ut_mock.a 00:08:16.876 SO libspdk_log.so.7.1 00:08:16.876 SO libspdk_ut.so.2.0 00:08:16.876 SO libspdk_ut_mock.so.6.0 00:08:16.876 SYMLINK libspdk_log.so 00:08:16.876 SYMLINK libspdk_ut.so 00:08:16.876 SYMLINK libspdk_ut_mock.so 00:08:17.136 CC lib/util/base64.o 00:08:17.136 CC lib/util/cpuset.o 00:08:17.136 CC lib/util/bit_array.o 00:08:17.136 CC lib/util/crc16.o 00:08:17.136 CC lib/util/crc32.o 00:08:17.136 CC lib/util/crc32c.o 00:08:17.136 CC lib/util/crc32_ieee.o 00:08:17.136 CC lib/util/crc64.o 00:08:17.136 CC lib/util/dif.o 00:08:17.136 CC lib/dma/dma.o 00:08:17.136 CC lib/util/fd.o 00:08:17.136 CC lib/util/fd_group.o 00:08:17.136 CC lib/util/file.o 00:08:17.136 CC lib/util/hexlify.o 00:08:17.136 CXX lib/trace_parser/trace.o 00:08:17.136 CC lib/util/math.o 00:08:17.136 CC lib/util/iov.o 00:08:17.136 CC lib/util/net.o 00:08:17.136 CC lib/util/pipe.o 00:08:17.136 CC lib/ioat/ioat.o 00:08:17.136 CC lib/util/strerror_tls.o 00:08:17.136 CC lib/util/string.o 00:08:17.136 CC lib/util/uuid.o 00:08:17.136 CC lib/util/xor.o 00:08:17.136 CC lib/util/zipf.o 00:08:17.136 CC lib/util/md5.o 00:08:17.395 CC lib/vfio_user/host/vfio_user.o 00:08:17.395 CC lib/vfio_user/host/vfio_user_pci.o 00:08:17.395 LIB libspdk_dma.a 00:08:17.395 SO libspdk_dma.so.5.0 00:08:17.395 LIB libspdk_ioat.a 00:08:17.395 SYMLINK libspdk_dma.so 00:08:17.395 SO libspdk_ioat.so.7.0 00:08:17.395 SYMLINK libspdk_ioat.so 00:08:17.395 LIB libspdk_vfio_user.a 00:08:17.655 SO libspdk_vfio_user.so.5.0 00:08:17.655 LIB libspdk_util.a 00:08:17.655 SYMLINK libspdk_vfio_user.so 00:08:17.655 SO libspdk_util.so.10.1 00:08:17.916 SYMLINK libspdk_util.so 00:08:18.176 CC lib/conf/conf.o 00:08:18.176 CC lib/json/json_parse.o 00:08:18.176 CC lib/rdma_utils/rdma_utils.o 00:08:18.176 CC lib/json/json_util.o 00:08:18.176 CC lib/json/json_write.o 00:08:18.176 CC lib/env_dpdk/env.o 00:08:18.176 CC lib/env_dpdk/memory.o 00:08:18.176 CC lib/env_dpdk/pci.o 00:08:18.176 CC lib/env_dpdk/init.o 00:08:18.176 CC lib/env_dpdk/threads.o 00:08:18.176 CC lib/vmd/vmd.o 00:08:18.176 CC lib/vmd/led.o 00:08:18.176 CC lib/env_dpdk/pci_ioat.o 00:08:18.176 CC lib/env_dpdk/pci_virtio.o 00:08:18.176 CC lib/idxd/idxd.o 00:08:18.176 CC lib/env_dpdk/pci_vmd.o 00:08:18.176 CC lib/idxd/idxd_user.o 00:08:18.176 CC lib/env_dpdk/pci_idxd.o 00:08:18.176 CC lib/idxd/idxd_kernel.o 00:08:18.176 CC lib/env_dpdk/pci_event.o 00:08:18.176 CC lib/env_dpdk/sigbus_handler.o 00:08:18.176 CC lib/env_dpdk/pci_dpdk.o 00:08:18.176 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:18.176 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:18.436 LIB libspdk_conf.a 00:08:18.436 SO libspdk_conf.so.6.0 00:08:18.436 LIB libspdk_json.a 00:08:18.436 LIB libspdk_rdma_utils.a 00:08:18.436 SO libspdk_json.so.6.0 00:08:18.436 SYMLINK libspdk_conf.so 00:08:18.436 SO libspdk_rdma_utils.so.1.0 00:08:18.436 SYMLINK libspdk_json.so 00:08:18.436 SYMLINK libspdk_rdma_utils.so 00:08:18.696 LIB libspdk_trace_parser.a 00:08:18.696 SO libspdk_trace_parser.so.6.0 00:08:18.696 LIB libspdk_idxd.a 00:08:18.696 LIB libspdk_vmd.a 00:08:18.696 SO libspdk_idxd.so.12.1 00:08:18.696 SYMLINK libspdk_trace_parser.so 00:08:18.696 SO libspdk_vmd.so.6.0 00:08:18.696 SYMLINK libspdk_idxd.so 00:08:18.696 SYMLINK libspdk_vmd.so 00:08:18.956 CC lib/jsonrpc/jsonrpc_server.o 00:08:18.956 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:18.956 CC 
lib/jsonrpc/jsonrpc_client.o 00:08:18.956 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:18.956 CC lib/rdma_provider/common.o 00:08:18.956 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:18.956 LIB libspdk_rdma_provider.a 00:08:18.956 LIB libspdk_jsonrpc.a 00:08:19.217 SO libspdk_rdma_provider.so.7.0 00:08:19.217 SO libspdk_jsonrpc.so.6.0 00:08:19.217 SYMLINK libspdk_rdma_provider.so 00:08:19.217 LIB libspdk_env_dpdk.a 00:08:19.217 SYMLINK libspdk_jsonrpc.so 00:08:19.217 SO libspdk_env_dpdk.so.15.1 00:08:19.477 SYMLINK libspdk_env_dpdk.so 00:08:19.477 CC lib/rpc/rpc.o 00:08:19.738 LIB libspdk_rpc.a 00:08:19.738 SO libspdk_rpc.so.6.0 00:08:19.738 SYMLINK libspdk_rpc.so 00:08:20.309 CC lib/trace/trace.o 00:08:20.309 CC lib/trace/trace_flags.o 00:08:20.309 CC lib/trace/trace_rpc.o 00:08:20.309 CC lib/keyring/keyring.o 00:08:20.309 CC lib/notify/notify.o 00:08:20.309 CC lib/keyring/keyring_rpc.o 00:08:20.309 CC lib/notify/notify_rpc.o 00:08:20.309 LIB libspdk_notify.a 00:08:20.309 SO libspdk_notify.so.6.0 00:08:20.309 LIB libspdk_keyring.a 00:08:20.309 LIB libspdk_trace.a 00:08:20.569 SO libspdk_keyring.so.2.0 00:08:20.569 SO libspdk_trace.so.11.0 00:08:20.569 SYMLINK libspdk_notify.so 00:08:20.569 SYMLINK libspdk_keyring.so 00:08:20.569 SYMLINK libspdk_trace.so 00:08:20.829 CC lib/thread/thread.o 00:08:20.829 CC lib/sock/sock.o 00:08:20.829 CC lib/thread/iobuf.o 00:08:20.829 CC lib/sock/sock_rpc.o 00:08:21.089 LIB libspdk_sock.a 00:08:21.349 SO libspdk_sock.so.10.0 00:08:21.349 SYMLINK libspdk_sock.so 00:08:21.610 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:21.610 CC lib/nvme/nvme_ctrlr.o 00:08:21.610 CC lib/nvme/nvme_fabric.o 00:08:21.610 CC lib/nvme/nvme_ns_cmd.o 00:08:21.610 CC lib/nvme/nvme_ns.o 00:08:21.610 CC lib/nvme/nvme_pcie_common.o 00:08:21.610 CC lib/nvme/nvme_pcie.o 00:08:21.610 CC lib/nvme/nvme_qpair.o 00:08:21.610 CC lib/nvme/nvme.o 00:08:21.610 CC lib/nvme/nvme_quirks.o 00:08:21.610 CC lib/nvme/nvme_transport.o 00:08:21.610 CC lib/nvme/nvme_discovery.o 00:08:21.610 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:21.610 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:21.610 CC lib/nvme/nvme_tcp.o 00:08:21.610 CC lib/nvme/nvme_opal.o 00:08:21.610 CC lib/nvme/nvme_io_msg.o 00:08:21.610 CC lib/nvme/nvme_poll_group.o 00:08:21.610 CC lib/nvme/nvme_zns.o 00:08:21.610 CC lib/nvme/nvme_auth.o 00:08:21.610 CC lib/nvme/nvme_stubs.o 00:08:21.610 CC lib/nvme/nvme_cuse.o 00:08:21.610 CC lib/nvme/nvme_vfio_user.o 00:08:21.610 CC lib/nvme/nvme_rdma.o 00:08:21.870 LIB libspdk_thread.a 00:08:22.131 SO libspdk_thread.so.11.0 00:08:22.131 SYMLINK libspdk_thread.so 00:08:22.391 CC lib/fsdev/fsdev_io.o 00:08:22.391 CC lib/fsdev/fsdev.o 00:08:22.391 CC lib/fsdev/fsdev_rpc.o 00:08:22.391 CC lib/virtio/virtio_vhost_user.o 00:08:22.391 CC lib/virtio/virtio.o 00:08:22.391 CC lib/virtio/virtio_pci.o 00:08:22.391 CC lib/virtio/virtio_vfio_user.o 00:08:22.391 CC lib/init/subsystem_rpc.o 00:08:22.391 CC lib/init/json_config.o 00:08:22.391 CC lib/init/subsystem.o 00:08:22.391 CC lib/init/rpc.o 00:08:22.391 CC lib/accel/accel.o 00:08:22.391 CC lib/vfu_tgt/tgt_endpoint.o 00:08:22.391 CC lib/accel/accel_rpc.o 00:08:22.391 CC lib/vfu_tgt/tgt_rpc.o 00:08:22.391 CC lib/accel/accel_sw.o 00:08:22.391 CC lib/blob/blobstore.o 00:08:22.391 CC lib/blob/request.o 00:08:22.391 CC lib/blob/zeroes.o 00:08:22.391 CC lib/blob/blob_bs_dev.o 00:08:22.652 LIB libspdk_init.a 00:08:22.652 SO libspdk_init.so.6.0 00:08:22.652 LIB libspdk_virtio.a 00:08:22.652 LIB libspdk_vfu_tgt.a 00:08:22.652 SYMLINK libspdk_init.so 00:08:22.652 SO libspdk_virtio.so.7.0 
00:08:22.912 SO libspdk_vfu_tgt.so.3.0 00:08:22.912 SYMLINK libspdk_virtio.so 00:08:22.912 SYMLINK libspdk_vfu_tgt.so 00:08:22.912 LIB libspdk_fsdev.a 00:08:22.912 SO libspdk_fsdev.so.2.0 00:08:23.172 SYMLINK libspdk_fsdev.so 00:08:23.172 CC lib/event/app.o 00:08:23.172 CC lib/event/reactor.o 00:08:23.172 CC lib/event/log_rpc.o 00:08:23.172 CC lib/event/app_rpc.o 00:08:23.172 CC lib/event/scheduler_static.o 00:08:23.172 LIB libspdk_accel.a 00:08:23.172 SO libspdk_accel.so.16.0 00:08:23.432 LIB libspdk_nvme.a 00:08:23.432 SYMLINK libspdk_accel.so 00:08:23.432 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:23.432 SO libspdk_nvme.so.15.0 00:08:23.432 LIB libspdk_event.a 00:08:23.432 SO libspdk_event.so.14.0 00:08:23.692 SYMLINK libspdk_event.so 00:08:23.692 SYMLINK libspdk_nvme.so 00:08:23.692 CC lib/bdev/bdev.o 00:08:23.692 CC lib/bdev/bdev_rpc.o 00:08:23.692 CC lib/bdev/bdev_zone.o 00:08:23.692 CC lib/bdev/part.o 00:08:23.692 CC lib/bdev/scsi_nvme.o 00:08:23.952 LIB libspdk_fuse_dispatcher.a 00:08:23.952 SO libspdk_fuse_dispatcher.so.1.0 00:08:23.952 SYMLINK libspdk_fuse_dispatcher.so 00:08:24.522 LIB libspdk_blob.a 00:08:24.782 SO libspdk_blob.so.12.0 00:08:24.782 SYMLINK libspdk_blob.so 00:08:25.043 CC lib/blobfs/blobfs.o 00:08:25.043 CC lib/blobfs/tree.o 00:08:25.043 CC lib/lvol/lvol.o 00:08:25.612 LIB libspdk_bdev.a 00:08:25.612 SO libspdk_bdev.so.17.0 00:08:25.612 LIB libspdk_blobfs.a 00:08:25.873 SYMLINK libspdk_bdev.so 00:08:25.873 SO libspdk_blobfs.so.11.0 00:08:25.873 LIB libspdk_lvol.a 00:08:25.873 SYMLINK libspdk_blobfs.so 00:08:25.873 SO libspdk_lvol.so.11.0 00:08:25.873 SYMLINK libspdk_lvol.so 00:08:26.133 CC lib/nbd/nbd.o 00:08:26.133 CC lib/ublk/ublk.o 00:08:26.133 CC lib/nbd/nbd_rpc.o 00:08:26.133 CC lib/ublk/ublk_rpc.o 00:08:26.133 CC lib/nvmf/ctrlr.o 00:08:26.133 CC lib/scsi/dev.o 00:08:26.133 CC lib/nvmf/ctrlr_discovery.o 00:08:26.133 CC lib/scsi/lun.o 00:08:26.133 CC lib/ftl/ftl_core.o 00:08:26.133 CC lib/nvmf/ctrlr_bdev.o 00:08:26.133 CC lib/scsi/port.o 00:08:26.133 CC lib/nvmf/subsystem.o 00:08:26.133 CC lib/ftl/ftl_init.o 00:08:26.133 CC lib/scsi/scsi.o 00:08:26.133 CC lib/ftl/ftl_debug.o 00:08:26.134 CC lib/ftl/ftl_layout.o 00:08:26.134 CC lib/nvmf/nvmf.o 00:08:26.134 CC lib/scsi/scsi_bdev.o 00:08:26.134 CC lib/nvmf/nvmf_rpc.o 00:08:26.134 CC lib/ftl/ftl_io.o 00:08:26.134 CC lib/scsi/scsi_pr.o 00:08:26.134 CC lib/scsi/task.o 00:08:26.134 CC lib/ftl/ftl_sb.o 00:08:26.134 CC lib/nvmf/transport.o 00:08:26.134 CC lib/scsi/scsi_rpc.o 00:08:26.134 CC lib/ftl/ftl_l2p.o 00:08:26.134 CC lib/nvmf/tcp.o 00:08:26.134 CC lib/ftl/ftl_l2p_flat.o 00:08:26.134 CC lib/nvmf/stubs.o 00:08:26.134 CC lib/ftl/ftl_nv_cache.o 00:08:26.134 CC lib/nvmf/mdns_server.o 00:08:26.134 CC lib/ftl/ftl_band.o 00:08:26.134 CC lib/nvmf/vfio_user.o 00:08:26.134 CC lib/ftl/ftl_band_ops.o 00:08:26.134 CC lib/nvmf/rdma.o 00:08:26.134 CC lib/nvmf/auth.o 00:08:26.134 CC lib/ftl/ftl_writer.o 00:08:26.134 CC lib/ftl/ftl_l2p_cache.o 00:08:26.134 CC lib/ftl/ftl_rq.o 00:08:26.134 CC lib/ftl/ftl_reloc.o 00:08:26.134 CC lib/ftl/ftl_p2l.o 00:08:26.134 CC lib/ftl/ftl_p2l_log.o 00:08:26.134 CC lib/ftl/mngt/ftl_mngt.o 00:08:26.134 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:26.134 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:26.134 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:26.134 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:26.134 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:26.134 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:26.134 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:26.134 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:26.134 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:08:26.134 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:26.134 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:26.134 CC lib/ftl/utils/ftl_conf.o 00:08:26.134 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:26.134 CC lib/ftl/utils/ftl_md.o 00:08:26.134 CC lib/ftl/utils/ftl_mempool.o 00:08:26.134 CC lib/ftl/utils/ftl_bitmap.o 00:08:26.134 CC lib/ftl/utils/ftl_property.o 00:08:26.134 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:26.134 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:26.134 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:26.134 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:26.134 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:26.134 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:26.134 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:26.134 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:26.134 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:26.134 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:26.134 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:26.134 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:26.134 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:26.134 CC lib/ftl/base/ftl_base_dev.o 00:08:26.134 CC lib/ftl/base/ftl_base_bdev.o 00:08:26.134 CC lib/ftl/ftl_trace.o 00:08:26.703 LIB libspdk_scsi.a 00:08:26.703 LIB libspdk_nbd.a 00:08:26.703 SO libspdk_scsi.so.9.0 00:08:26.703 SO libspdk_nbd.so.7.0 00:08:26.964 SYMLINK libspdk_nbd.so 00:08:26.964 SYMLINK libspdk_scsi.so 00:08:26.964 LIB libspdk_ublk.a 00:08:26.964 SO libspdk_ublk.so.3.0 00:08:26.964 SYMLINK libspdk_ublk.so 00:08:26.964 LIB libspdk_ftl.a 00:08:27.224 CC lib/vhost/vhost.o 00:08:27.224 CC lib/vhost/vhost_rpc.o 00:08:27.224 CC lib/vhost/vhost_scsi.o 00:08:27.224 CC lib/vhost/rte_vhost_user.o 00:08:27.224 CC lib/vhost/vhost_blk.o 00:08:27.224 CC lib/iscsi/conn.o 00:08:27.224 SO libspdk_ftl.so.9.0 00:08:27.224 CC lib/iscsi/init_grp.o 00:08:27.224 CC lib/iscsi/iscsi.o 00:08:27.224 CC lib/iscsi/param.o 00:08:27.224 CC lib/iscsi/portal_grp.o 00:08:27.224 CC lib/iscsi/tgt_node.o 00:08:27.224 CC lib/iscsi/iscsi_subsystem.o 00:08:27.224 CC lib/iscsi/iscsi_rpc.o 00:08:27.224 CC lib/iscsi/task.o 00:08:27.484 SYMLINK libspdk_ftl.so 00:08:27.745 LIB libspdk_nvmf.a 00:08:28.006 SO libspdk_nvmf.so.20.0 00:08:28.006 LIB libspdk_vhost.a 00:08:28.006 SO libspdk_vhost.so.8.0 00:08:28.006 SYMLINK libspdk_nvmf.so 00:08:28.006 SYMLINK libspdk_vhost.so 00:08:28.266 LIB libspdk_iscsi.a 00:08:28.266 SO libspdk_iscsi.so.8.0 00:08:28.266 SYMLINK libspdk_iscsi.so 00:08:28.837 CC module/env_dpdk/env_dpdk_rpc.o 00:08:28.838 CC module/vfu_device/vfu_virtio.o 00:08:28.838 CC module/vfu_device/vfu_virtio_blk.o 00:08:28.838 CC module/vfu_device/vfu_virtio_scsi.o 00:08:28.838 CC module/vfu_device/vfu_virtio_rpc.o 00:08:28.838 CC module/vfu_device/vfu_virtio_fs.o 00:08:29.099 CC module/keyring/file/keyring.o 00:08:29.099 CC module/keyring/file/keyring_rpc.o 00:08:29.099 LIB libspdk_env_dpdk_rpc.a 00:08:29.099 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:29.099 CC module/accel/error/accel_error.o 00:08:29.099 CC module/accel/error/accel_error_rpc.o 00:08:29.099 CC module/keyring/linux/keyring.o 00:08:29.099 CC module/keyring/linux/keyring_rpc.o 00:08:29.099 CC module/scheduler/gscheduler/gscheduler.o 00:08:29.099 CC module/blob/bdev/blob_bdev.o 00:08:29.099 CC module/sock/posix/posix.o 00:08:29.099 CC module/accel/dsa/accel_dsa.o 00:08:29.099 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:29.099 CC module/accel/ioat/accel_ioat.o 00:08:29.099 CC module/accel/iaa/accel_iaa.o 00:08:29.099 CC module/accel/iaa/accel_iaa_rpc.o 00:08:29.099 CC module/accel/dsa/accel_dsa_rpc.o 
00:08:29.099 CC module/accel/ioat/accel_ioat_rpc.o 00:08:29.099 CC module/fsdev/aio/fsdev_aio.o 00:08:29.099 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:29.099 CC module/fsdev/aio/linux_aio_mgr.o 00:08:29.099 SO libspdk_env_dpdk_rpc.so.6.0 00:08:29.099 SYMLINK libspdk_env_dpdk_rpc.so 00:08:29.360 LIB libspdk_keyring_file.a 00:08:29.360 LIB libspdk_keyring_linux.a 00:08:29.360 LIB libspdk_scheduler_dpdk_governor.a 00:08:29.360 SO libspdk_keyring_file.so.2.0 00:08:29.360 LIB libspdk_scheduler_gscheduler.a 00:08:29.360 LIB libspdk_scheduler_dynamic.a 00:08:29.360 LIB libspdk_accel_error.a 00:08:29.360 SO libspdk_keyring_linux.so.1.0 00:08:29.360 LIB libspdk_accel_ioat.a 00:08:29.360 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:29.360 SO libspdk_scheduler_gscheduler.so.4.0 00:08:29.360 SO libspdk_scheduler_dynamic.so.4.0 00:08:29.360 LIB libspdk_accel_iaa.a 00:08:29.360 SO libspdk_accel_error.so.2.0 00:08:29.360 SYMLINK libspdk_keyring_file.so 00:08:29.360 SO libspdk_accel_ioat.so.6.0 00:08:29.360 SYMLINK libspdk_keyring_linux.so 00:08:29.360 SO libspdk_accel_iaa.so.3.0 00:08:29.360 SYMLINK libspdk_scheduler_gscheduler.so 00:08:29.360 LIB libspdk_blob_bdev.a 00:08:29.360 LIB libspdk_accel_dsa.a 00:08:29.360 SYMLINK libspdk_scheduler_dynamic.so 00:08:29.360 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:29.360 SYMLINK libspdk_accel_error.so 00:08:29.360 SO libspdk_blob_bdev.so.12.0 00:08:29.360 SYMLINK libspdk_accel_ioat.so 00:08:29.360 LIB libspdk_vfu_device.a 00:08:29.360 SO libspdk_accel_dsa.so.5.0 00:08:29.360 SYMLINK libspdk_accel_iaa.so 00:08:29.620 SO libspdk_vfu_device.so.3.0 00:08:29.620 SYMLINK libspdk_blob_bdev.so 00:08:29.620 SYMLINK libspdk_accel_dsa.so 00:08:29.620 SYMLINK libspdk_vfu_device.so 00:08:29.620 LIB libspdk_fsdev_aio.a 00:08:29.620 SO libspdk_fsdev_aio.so.1.0 00:08:29.620 LIB libspdk_sock_posix.a 00:08:29.881 SO libspdk_sock_posix.so.6.0 00:08:29.881 SYMLINK libspdk_fsdev_aio.so 00:08:29.881 SYMLINK libspdk_sock_posix.so 00:08:30.142 CC module/bdev/raid/bdev_raid_rpc.o 00:08:30.142 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:30.142 CC module/bdev/raid/bdev_raid_sb.o 00:08:30.142 CC module/bdev/delay/vbdev_delay.o 00:08:30.142 CC module/bdev/raid/bdev_raid.o 00:08:30.142 CC module/bdev/raid/raid0.o 00:08:30.142 CC module/bdev/raid/raid1.o 00:08:30.142 CC module/bdev/raid/concat.o 00:08:30.142 CC module/bdev/gpt/gpt.o 00:08:30.142 CC module/bdev/gpt/vbdev_gpt.o 00:08:30.142 CC module/bdev/null/bdev_null.o 00:08:30.142 CC module/bdev/null/bdev_null_rpc.o 00:08:30.142 CC module/blobfs/bdev/blobfs_bdev.o 00:08:30.142 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:30.142 CC module/bdev/lvol/vbdev_lvol.o 00:08:30.142 CC module/bdev/ftl/bdev_ftl.o 00:08:30.142 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:30.142 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:30.142 CC module/bdev/aio/bdev_aio.o 00:08:30.142 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:30.142 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:30.142 CC module/bdev/split/vbdev_split.o 00:08:30.142 CC module/bdev/error/vbdev_error.o 00:08:30.142 CC module/bdev/split/vbdev_split_rpc.o 00:08:30.142 CC module/bdev/aio/bdev_aio_rpc.o 00:08:30.142 CC module/bdev/error/vbdev_error_rpc.o 00:08:30.142 CC module/bdev/malloc/bdev_malloc.o 00:08:30.142 CC module/bdev/nvme/bdev_nvme.o 00:08:30.142 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:30.142 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:30.142 CC module/bdev/nvme/nvme_rpc.o 00:08:30.142 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:30.142 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:08:30.142 CC module/bdev/passthru/vbdev_passthru.o 00:08:30.142 CC module/bdev/nvme/bdev_mdns_client.o 00:08:30.142 CC module/bdev/nvme/vbdev_opal.o 00:08:30.142 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:30.142 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:30.142 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:30.142 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:30.142 CC module/bdev/iscsi/bdev_iscsi.o 00:08:30.142 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:30.402 LIB libspdk_blobfs_bdev.a 00:08:30.402 SO libspdk_blobfs_bdev.so.6.0 00:08:30.402 LIB libspdk_bdev_gpt.a 00:08:30.402 LIB libspdk_bdev_null.a 00:08:30.402 LIB libspdk_bdev_split.a 00:08:30.402 SO libspdk_bdev_gpt.so.6.0 00:08:30.402 LIB libspdk_bdev_ftl.a 00:08:30.402 SO libspdk_bdev_null.so.6.0 00:08:30.402 LIB libspdk_bdev_error.a 00:08:30.402 LIB libspdk_bdev_aio.a 00:08:30.402 SO libspdk_bdev_split.so.6.0 00:08:30.402 SYMLINK libspdk_blobfs_bdev.so 00:08:30.402 LIB libspdk_bdev_passthru.a 00:08:30.402 SO libspdk_bdev_error.so.6.0 00:08:30.402 SO libspdk_bdev_ftl.so.6.0 00:08:30.402 LIB libspdk_bdev_zone_block.a 00:08:30.402 LIB libspdk_bdev_delay.a 00:08:30.402 SYMLINK libspdk_bdev_gpt.so 00:08:30.402 SYMLINK libspdk_bdev_null.so 00:08:30.402 LIB libspdk_bdev_iscsi.a 00:08:30.402 SO libspdk_bdev_aio.so.6.0 00:08:30.402 SO libspdk_bdev_passthru.so.6.0 00:08:30.402 SYMLINK libspdk_bdev_split.so 00:08:30.402 SO libspdk_bdev_delay.so.6.0 00:08:30.402 SO libspdk_bdev_zone_block.so.6.0 00:08:30.402 SO libspdk_bdev_iscsi.so.6.0 00:08:30.402 LIB libspdk_bdev_malloc.a 00:08:30.402 SYMLINK libspdk_bdev_error.so 00:08:30.402 SYMLINK libspdk_bdev_ftl.so 00:08:30.402 SO libspdk_bdev_malloc.so.6.0 00:08:30.402 SYMLINK libspdk_bdev_passthru.so 00:08:30.402 SYMLINK libspdk_bdev_aio.so 00:08:30.402 SYMLINK libspdk_bdev_delay.so 00:08:30.402 SYMLINK libspdk_bdev_iscsi.so 00:08:30.402 SYMLINK libspdk_bdev_zone_block.so 00:08:30.402 LIB libspdk_bdev_lvol.a 00:08:30.662 SYMLINK libspdk_bdev_malloc.so 00:08:30.662 LIB libspdk_bdev_virtio.a 00:08:30.662 SO libspdk_bdev_lvol.so.6.0 00:08:30.662 SO libspdk_bdev_virtio.so.6.0 00:08:30.662 SYMLINK libspdk_bdev_lvol.so 00:08:30.662 SYMLINK libspdk_bdev_virtio.so 00:08:30.923 LIB libspdk_bdev_raid.a 00:08:30.923 SO libspdk_bdev_raid.so.6.0 00:08:30.923 SYMLINK libspdk_bdev_raid.so 00:08:31.863 LIB libspdk_bdev_nvme.a 00:08:31.863 SO libspdk_bdev_nvme.so.7.1 00:08:32.123 SYMLINK libspdk_bdev_nvme.so 00:08:33.065 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:33.065 CC module/event/subsystems/vmd/vmd.o 00:08:33.065 CC module/event/subsystems/iobuf/iobuf.o 00:08:33.065 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:33.065 CC module/event/subsystems/keyring/keyring.o 00:08:33.065 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:08:33.065 CC module/event/subsystems/sock/sock.o 00:08:33.065 CC module/event/subsystems/scheduler/scheduler.o 00:08:33.065 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:33.065 CC module/event/subsystems/fsdev/fsdev.o 00:08:33.065 LIB libspdk_event_vfu_tgt.a 00:08:33.065 LIB libspdk_event_sock.a 00:08:33.065 LIB libspdk_event_keyring.a 00:08:33.065 LIB libspdk_event_iobuf.a 00:08:33.065 LIB libspdk_event_vmd.a 00:08:33.065 LIB libspdk_event_vhost_blk.a 00:08:33.065 LIB libspdk_event_scheduler.a 00:08:33.065 LIB libspdk_event_fsdev.a 00:08:33.065 SO libspdk_event_vfu_tgt.so.3.0 00:08:33.065 SO libspdk_event_sock.so.5.0 00:08:33.065 SO libspdk_event_keyring.so.1.0 00:08:33.065 SO libspdk_event_iobuf.so.3.0 00:08:33.065 SO 
libspdk_event_vmd.so.6.0 00:08:33.065 SO libspdk_event_scheduler.so.4.0 00:08:33.065 SO libspdk_event_vhost_blk.so.3.0 00:08:33.065 SO libspdk_event_fsdev.so.1.0 00:08:33.065 SYMLINK libspdk_event_keyring.so 00:08:33.065 SYMLINK libspdk_event_vfu_tgt.so 00:08:33.065 SYMLINK libspdk_event_sock.so 00:08:33.065 SYMLINK libspdk_event_iobuf.so 00:08:33.065 SYMLINK libspdk_event_scheduler.so 00:08:33.065 SYMLINK libspdk_event_vmd.so 00:08:33.065 SYMLINK libspdk_event_vhost_blk.so 00:08:33.065 SYMLINK libspdk_event_fsdev.so 00:08:33.325 CC module/event/subsystems/accel/accel.o 00:08:33.586 LIB libspdk_event_accel.a 00:08:33.586 SO libspdk_event_accel.so.6.0 00:08:33.586 SYMLINK libspdk_event_accel.so 00:08:34.158 CC module/event/subsystems/bdev/bdev.o 00:08:34.158 LIB libspdk_event_bdev.a 00:08:34.158 SO libspdk_event_bdev.so.6.0 00:08:34.418 SYMLINK libspdk_event_bdev.so 00:08:34.679 CC module/event/subsystems/scsi/scsi.o 00:08:34.679 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:34.679 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:34.679 CC module/event/subsystems/ublk/ublk.o 00:08:34.679 CC module/event/subsystems/nbd/nbd.o 00:08:34.940 LIB libspdk_event_ublk.a 00:08:34.940 LIB libspdk_event_nbd.a 00:08:34.940 LIB libspdk_event_scsi.a 00:08:34.940 SO libspdk_event_nbd.so.6.0 00:08:34.940 SO libspdk_event_ublk.so.3.0 00:08:34.940 SO libspdk_event_scsi.so.6.0 00:08:34.940 LIB libspdk_event_nvmf.a 00:08:34.940 SO libspdk_event_nvmf.so.6.0 00:08:34.940 SYMLINK libspdk_event_nbd.so 00:08:34.940 SYMLINK libspdk_event_ublk.so 00:08:34.940 SYMLINK libspdk_event_scsi.so 00:08:34.940 SYMLINK libspdk_event_nvmf.so 00:08:35.200 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:35.200 CC module/event/subsystems/iscsi/iscsi.o 00:08:35.461 LIB libspdk_event_vhost_scsi.a 00:08:35.461 LIB libspdk_event_iscsi.a 00:08:35.461 SO libspdk_event_vhost_scsi.so.3.0 00:08:35.461 SO libspdk_event_iscsi.so.6.0 00:08:35.461 SYMLINK libspdk_event_vhost_scsi.so 00:08:35.461 SYMLINK libspdk_event_iscsi.so 00:08:35.722 SO libspdk.so.6.0 00:08:35.722 SYMLINK libspdk.so 00:08:36.309 CC app/trace_record/trace_record.o 00:08:36.309 CXX app/trace/trace.o 00:08:36.309 CC app/spdk_lspci/spdk_lspci.o 00:08:36.309 CC test/rpc_client/rpc_client_test.o 00:08:36.309 CC app/spdk_nvme_identify/identify.o 00:08:36.309 CC app/spdk_top/spdk_top.o 00:08:36.309 CC app/spdk_nvme_discover/discovery_aer.o 00:08:36.309 TEST_HEADER include/spdk/accel.h 00:08:36.309 TEST_HEADER include/spdk/accel_module.h 00:08:36.309 TEST_HEADER include/spdk/assert.h 00:08:36.309 TEST_HEADER include/spdk/barrier.h 00:08:36.309 CC app/spdk_nvme_perf/perf.o 00:08:36.309 TEST_HEADER include/spdk/base64.h 00:08:36.309 TEST_HEADER include/spdk/bdev.h 00:08:36.309 TEST_HEADER include/spdk/bdev_module.h 00:08:36.309 TEST_HEADER include/spdk/bdev_zone.h 00:08:36.309 TEST_HEADER include/spdk/bit_array.h 00:08:36.309 TEST_HEADER include/spdk/bit_pool.h 00:08:36.309 TEST_HEADER include/spdk/blob_bdev.h 00:08:36.309 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:36.309 TEST_HEADER include/spdk/blobfs.h 00:08:36.309 CC app/iscsi_tgt/iscsi_tgt.o 00:08:36.309 TEST_HEADER include/spdk/blob.h 00:08:36.309 TEST_HEADER include/spdk/config.h 00:08:36.309 TEST_HEADER include/spdk/conf.h 00:08:36.309 TEST_HEADER include/spdk/cpuset.h 00:08:36.309 TEST_HEADER include/spdk/crc16.h 00:08:36.309 TEST_HEADER include/spdk/crc32.h 00:08:36.309 TEST_HEADER include/spdk/crc64.h 00:08:36.309 CC app/spdk_dd/spdk_dd.o 00:08:36.309 TEST_HEADER include/spdk/dif.h 00:08:36.309 TEST_HEADER 
include/spdk/dma.h 00:08:36.309 TEST_HEADER include/spdk/env.h 00:08:36.309 TEST_HEADER include/spdk/endian.h 00:08:36.309 TEST_HEADER include/spdk/event.h 00:08:36.309 TEST_HEADER include/spdk/env_dpdk.h 00:08:36.309 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:36.309 TEST_HEADER include/spdk/fd_group.h 00:08:36.309 TEST_HEADER include/spdk/fd.h 00:08:36.309 TEST_HEADER include/spdk/fsdev.h 00:08:36.309 TEST_HEADER include/spdk/ftl.h 00:08:36.309 TEST_HEADER include/spdk/fsdev_module.h 00:08:36.309 TEST_HEADER include/spdk/gpt_spec.h 00:08:36.309 TEST_HEADER include/spdk/file.h 00:08:36.309 TEST_HEADER include/spdk/hexlify.h 00:08:36.309 CC app/nvmf_tgt/nvmf_main.o 00:08:36.309 TEST_HEADER include/spdk/idxd.h 00:08:36.309 TEST_HEADER include/spdk/histogram_data.h 00:08:36.309 TEST_HEADER include/spdk/init.h 00:08:36.309 TEST_HEADER include/spdk/idxd_spec.h 00:08:36.309 TEST_HEADER include/spdk/ioat.h 00:08:36.309 TEST_HEADER include/spdk/ioat_spec.h 00:08:36.309 TEST_HEADER include/spdk/iscsi_spec.h 00:08:36.309 TEST_HEADER include/spdk/json.h 00:08:36.309 TEST_HEADER include/spdk/keyring.h 00:08:36.309 TEST_HEADER include/spdk/jsonrpc.h 00:08:36.309 TEST_HEADER include/spdk/keyring_module.h 00:08:36.309 TEST_HEADER include/spdk/log.h 00:08:36.309 TEST_HEADER include/spdk/likely.h 00:08:36.309 TEST_HEADER include/spdk/lvol.h 00:08:36.309 TEST_HEADER include/spdk/md5.h 00:08:36.309 TEST_HEADER include/spdk/memory.h 00:08:36.309 TEST_HEADER include/spdk/nbd.h 00:08:36.309 TEST_HEADER include/spdk/mmio.h 00:08:36.309 TEST_HEADER include/spdk/net.h 00:08:36.309 TEST_HEADER include/spdk/nvme.h 00:08:36.309 TEST_HEADER include/spdk/notify.h 00:08:36.309 TEST_HEADER include/spdk/nvme_intel.h 00:08:36.309 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:36.309 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:36.309 CC app/spdk_tgt/spdk_tgt.o 00:08:36.309 TEST_HEADER include/spdk/nvme_spec.h 00:08:36.309 TEST_HEADER include/spdk/nvme_zns.h 00:08:36.309 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:36.309 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:36.309 TEST_HEADER include/spdk/nvmf_transport.h 00:08:36.309 TEST_HEADER include/spdk/nvmf.h 00:08:36.309 TEST_HEADER include/spdk/nvmf_spec.h 00:08:36.309 TEST_HEADER include/spdk/opal.h 00:08:36.309 TEST_HEADER include/spdk/opal_spec.h 00:08:36.309 TEST_HEADER include/spdk/pci_ids.h 00:08:36.309 TEST_HEADER include/spdk/queue.h 00:08:36.309 TEST_HEADER include/spdk/pipe.h 00:08:36.309 TEST_HEADER include/spdk/rpc.h 00:08:36.309 TEST_HEADER include/spdk/reduce.h 00:08:36.309 TEST_HEADER include/spdk/scheduler.h 00:08:36.309 TEST_HEADER include/spdk/scsi.h 00:08:36.309 TEST_HEADER include/spdk/sock.h 00:08:36.309 TEST_HEADER include/spdk/scsi_spec.h 00:08:36.309 TEST_HEADER include/spdk/stdinc.h 00:08:36.309 TEST_HEADER include/spdk/string.h 00:08:36.309 TEST_HEADER include/spdk/thread.h 00:08:36.309 TEST_HEADER include/spdk/trace_parser.h 00:08:36.309 TEST_HEADER include/spdk/trace.h 00:08:36.309 TEST_HEADER include/spdk/tree.h 00:08:36.309 TEST_HEADER include/spdk/util.h 00:08:36.309 TEST_HEADER include/spdk/uuid.h 00:08:36.309 TEST_HEADER include/spdk/ublk.h 00:08:36.309 TEST_HEADER include/spdk/version.h 00:08:36.309 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:36.309 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:36.309 TEST_HEADER include/spdk/xor.h 00:08:36.309 TEST_HEADER include/spdk/vhost.h 00:08:36.310 TEST_HEADER include/spdk/zipf.h 00:08:36.310 TEST_HEADER include/spdk/vmd.h 00:08:36.310 CXX test/cpp_headers/accel.o 00:08:36.310 CXX 
test/cpp_headers/accel_module.o 00:08:36.310 CXX test/cpp_headers/barrier.o 00:08:36.310 CXX test/cpp_headers/assert.o 00:08:36.310 CXX test/cpp_headers/base64.o 00:08:36.310 CXX test/cpp_headers/bdev.o 00:08:36.310 CXX test/cpp_headers/bdev_zone.o 00:08:36.310 CXX test/cpp_headers/bdev_module.o 00:08:36.310 CXX test/cpp_headers/blobfs_bdev.o 00:08:36.310 CXX test/cpp_headers/bit_array.o 00:08:36.310 CXX test/cpp_headers/bit_pool.o 00:08:36.310 CXX test/cpp_headers/blob_bdev.o 00:08:36.310 CXX test/cpp_headers/blob.o 00:08:36.310 CXX test/cpp_headers/blobfs.o 00:08:36.310 CXX test/cpp_headers/cpuset.o 00:08:36.310 CXX test/cpp_headers/conf.o 00:08:36.310 CXX test/cpp_headers/config.o 00:08:36.310 CXX test/cpp_headers/crc16.o 00:08:36.310 CXX test/cpp_headers/crc32.o 00:08:36.310 CXX test/cpp_headers/crc64.o 00:08:36.310 CXX test/cpp_headers/endian.o 00:08:36.310 CXX test/cpp_headers/dif.o 00:08:36.310 CXX test/cpp_headers/event.o 00:08:36.310 CXX test/cpp_headers/env_dpdk.o 00:08:36.310 CXX test/cpp_headers/dma.o 00:08:36.310 CXX test/cpp_headers/fd_group.o 00:08:36.310 CXX test/cpp_headers/env.o 00:08:36.310 CXX test/cpp_headers/fd.o 00:08:36.310 CXX test/cpp_headers/fsdev.o 00:08:36.310 CXX test/cpp_headers/fsdev_module.o 00:08:36.310 CXX test/cpp_headers/file.o 00:08:36.310 CXX test/cpp_headers/ftl.o 00:08:36.310 CXX test/cpp_headers/histogram_data.o 00:08:36.310 CXX test/cpp_headers/gpt_spec.o 00:08:36.310 CXX test/cpp_headers/hexlify.o 00:08:36.310 CXX test/cpp_headers/init.o 00:08:36.310 CXX test/cpp_headers/idxd.o 00:08:36.310 CXX test/cpp_headers/idxd_spec.o 00:08:36.310 CXX test/cpp_headers/ioat.o 00:08:36.310 CXX test/cpp_headers/iscsi_spec.o 00:08:36.310 CXX test/cpp_headers/ioat_spec.o 00:08:36.310 CXX test/cpp_headers/json.o 00:08:36.310 CXX test/cpp_headers/keyring.o 00:08:36.310 CXX test/cpp_headers/jsonrpc.o 00:08:36.310 CXX test/cpp_headers/keyring_module.o 00:08:36.310 CXX test/cpp_headers/likely.o 00:08:36.310 CXX test/cpp_headers/log.o 00:08:36.310 CXX test/cpp_headers/lvol.o 00:08:36.310 CXX test/cpp_headers/md5.o 00:08:36.310 CXX test/cpp_headers/memory.o 00:08:36.310 CXX test/cpp_headers/mmio.o 00:08:36.310 CXX test/cpp_headers/nbd.o 00:08:36.310 CXX test/cpp_headers/nvme.o 00:08:36.310 CXX test/cpp_headers/net.o 00:08:36.310 CXX test/cpp_headers/nvme_intel.o 00:08:36.310 CXX test/cpp_headers/notify.o 00:08:36.310 CXX test/cpp_headers/nvme_ocssd.o 00:08:36.310 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:36.310 CXX test/cpp_headers/nvme_spec.o 00:08:36.310 CXX test/cpp_headers/nvme_zns.o 00:08:36.310 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:36.310 CXX test/cpp_headers/nvmf_cmd.o 00:08:36.310 CXX test/cpp_headers/nvmf_transport.o 00:08:36.310 CXX test/cpp_headers/nvmf.o 00:08:36.310 CXX test/cpp_headers/nvmf_spec.o 00:08:36.310 CXX test/cpp_headers/opal.o 00:08:36.310 CXX test/cpp_headers/pci_ids.o 00:08:36.310 CXX test/cpp_headers/opal_spec.o 00:08:36.310 CXX test/cpp_headers/pipe.o 00:08:36.310 CXX test/cpp_headers/reduce.o 00:08:36.310 CXX test/cpp_headers/queue.o 00:08:36.310 CXX test/cpp_headers/rpc.o 00:08:36.310 CXX test/cpp_headers/scsi.o 00:08:36.310 CXX test/cpp_headers/scheduler.o 00:08:36.310 CXX test/cpp_headers/scsi_spec.o 00:08:36.310 CXX test/cpp_headers/stdinc.o 00:08:36.310 CXX test/cpp_headers/sock.o 00:08:36.310 CXX test/cpp_headers/string.o 00:08:36.310 CXX test/cpp_headers/thread.o 00:08:36.310 CXX test/cpp_headers/trace.o 00:08:36.310 CXX test/cpp_headers/trace_parser.o 00:08:36.310 CXX test/cpp_headers/tree.o 00:08:36.310 CXX 
test/cpp_headers/ublk.o 00:08:36.310 CC test/app/stub/stub.o 00:08:36.310 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:36.310 CC app/fio/nvme/fio_plugin.o 00:08:36.310 CC test/env/vtophys/vtophys.o 00:08:36.310 CC test/thread/poller_perf/poller_perf.o 00:08:36.310 CC test/env/pci/pci_ut.o 00:08:36.310 CC examples/ioat/perf/perf.o 00:08:36.602 CC test/app/histogram_perf/histogram_perf.o 00:08:36.602 CC test/app/jsoncat/jsoncat.o 00:08:36.602 CC examples/ioat/verify/verify.o 00:08:36.602 CC examples/util/zipf/zipf.o 00:08:36.602 CC test/env/memory/memory_ut.o 00:08:36.602 CXX test/cpp_headers/util.o 00:08:36.602 CC test/dma/test_dma/test_dma.o 00:08:36.602 CC app/fio/bdev/fio_plugin.o 00:08:36.602 LINK spdk_lspci 00:08:36.602 CC test/app/bdev_svc/bdev_svc.o 00:08:36.892 LINK interrupt_tgt 00:08:36.892 LINK rpc_client_test 00:08:36.892 LINK iscsi_tgt 00:08:36.892 LINK nvmf_tgt 00:08:36.892 LINK spdk_nvme_discover 00:08:36.892 LINK spdk_trace_record 00:08:36.892 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:37.155 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:37.155 CC test/env/mem_callbacks/mem_callbacks.o 00:08:37.155 LINK jsoncat 00:08:37.155 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:37.155 LINK vtophys 00:08:37.155 LINK poller_perf 00:08:37.155 LINK histogram_perf 00:08:37.155 CXX test/cpp_headers/uuid.o 00:08:37.155 LINK spdk_tgt 00:08:37.155 CXX test/cpp_headers/version.o 00:08:37.155 CXX test/cpp_headers/vfio_user_pci.o 00:08:37.155 LINK env_dpdk_post_init 00:08:37.155 CXX test/cpp_headers/vfio_user_spec.o 00:08:37.155 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:37.155 CXX test/cpp_headers/vhost.o 00:08:37.155 CXX test/cpp_headers/vmd.o 00:08:37.155 CXX test/cpp_headers/xor.o 00:08:37.155 CXX test/cpp_headers/zipf.o 00:08:37.155 LINK zipf 00:08:37.155 LINK stub 00:08:37.155 LINK spdk_dd 00:08:37.155 LINK ioat_perf 00:08:37.155 LINK verify 00:08:37.155 LINK bdev_svc 00:08:37.414 LINK spdk_trace 00:08:37.414 LINK pci_ut 00:08:37.414 LINK nvme_fuzz 00:08:37.414 LINK test_dma 00:08:37.414 LINK vhost_fuzz 00:08:37.675 LINK spdk_nvme_perf 00:08:37.675 LINK spdk_nvme_identify 00:08:37.675 LINK spdk_bdev 00:08:37.675 CC examples/sock/hello_world/hello_sock.o 00:08:37.675 CC test/event/reactor/reactor.o 00:08:37.675 CC test/event/event_perf/event_perf.o 00:08:37.675 LINK mem_callbacks 00:08:37.675 CC examples/vmd/led/led.o 00:08:37.675 CC test/event/reactor_perf/reactor_perf.o 00:08:37.675 CC examples/vmd/lsvmd/lsvmd.o 00:08:37.675 CC examples/idxd/perf/perf.o 00:08:37.675 CC test/event/scheduler/scheduler.o 00:08:37.675 LINK spdk_nvme 00:08:37.675 CC app/vhost/vhost.o 00:08:37.675 CC test/event/app_repeat/app_repeat.o 00:08:37.675 LINK spdk_top 00:08:37.675 CC examples/thread/thread/thread_ex.o 00:08:37.675 LINK event_perf 00:08:37.935 LINK reactor 00:08:37.935 LINK lsvmd 00:08:37.935 LINK reactor_perf 00:08:37.935 LINK led 00:08:37.935 LINK app_repeat 00:08:37.935 LINK hello_sock 00:08:37.935 LINK vhost 00:08:37.935 LINK scheduler 00:08:37.935 LINK thread 00:08:37.935 LINK idxd_perf 00:08:37.935 LINK memory_ut 00:08:37.935 CC test/nvme/err_injection/err_injection.o 00:08:37.935 CC test/nvme/connect_stress/connect_stress.o 00:08:37.935 CC test/nvme/overhead/overhead.o 00:08:37.935 CC test/nvme/compliance/nvme_compliance.o 00:08:37.935 CC test/nvme/simple_copy/simple_copy.o 00:08:37.935 CC test/nvme/reserve/reserve.o 00:08:37.935 CC test/nvme/aer/aer.o 00:08:37.935 CC test/nvme/e2edp/nvme_dp.o 00:08:37.935 CC test/nvme/cuse/cuse.o 00:08:37.935 CC test/nvme/reset/reset.o 
00:08:37.935 CC test/nvme/fused_ordering/fused_ordering.o 00:08:37.935 CC test/nvme/fdp/fdp.o 00:08:37.935 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:37.935 CC test/nvme/sgl/sgl.o 00:08:37.935 CC test/nvme/startup/startup.o 00:08:37.935 CC test/accel/dif/dif.o 00:08:37.935 CC test/nvme/boot_partition/boot_partition.o 00:08:38.194 CC test/blobfs/mkfs/mkfs.o 00:08:38.194 CC test/lvol/esnap/esnap.o 00:08:38.194 LINK err_injection 00:08:38.194 LINK connect_stress 00:08:38.194 LINK startup 00:08:38.194 LINK simple_copy 00:08:38.194 LINK reserve 00:08:38.194 LINK fused_ordering 00:08:38.194 LINK boot_partition 00:08:38.194 LINK doorbell_aers 00:08:38.194 LINK reset 00:08:38.194 LINK nvme_dp 00:08:38.194 LINK sgl 00:08:38.194 LINK overhead 00:08:38.194 LINK mkfs 00:08:38.454 LINK aer 00:08:38.454 LINK nvme_compliance 00:08:38.454 LINK fdp 00:08:38.454 CC examples/nvme/abort/abort.o 00:08:38.454 CC examples/nvme/reconnect/reconnect.o 00:08:38.454 LINK iscsi_fuzz 00:08:38.454 CC examples/nvme/hello_world/hello_world.o 00:08:38.454 CC examples/nvme/arbitration/arbitration.o 00:08:38.454 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:38.454 CC examples/nvme/hotplug/hotplug.o 00:08:38.454 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:38.454 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:38.454 CC examples/accel/perf/accel_perf.o 00:08:38.454 CC examples/blob/hello_world/hello_blob.o 00:08:38.454 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:38.454 CC examples/blob/cli/blobcli.o 00:08:38.714 LINK pmr_persistence 00:08:38.714 LINK cmb_copy 00:08:38.714 LINK hello_world 00:08:38.714 LINK dif 00:08:38.714 LINK hotplug 00:08:38.714 LINK reconnect 00:08:38.714 LINK arbitration 00:08:38.714 LINK abort 00:08:38.714 LINK hello_blob 00:08:38.714 LINK nvme_manage 00:08:38.714 LINK hello_fsdev 00:08:38.974 LINK accel_perf 00:08:38.974 LINK blobcli 00:08:38.974 LINK cuse 00:08:39.234 CC test/bdev/bdevio/bdevio.o 00:08:39.494 CC examples/bdev/hello_world/hello_bdev.o 00:08:39.494 CC examples/bdev/bdevperf/bdevperf.o 00:08:39.494 LINK bdevio 00:08:39.755 LINK hello_bdev 00:08:40.015 LINK bdevperf 00:08:40.584 CC examples/nvmf/nvmf/nvmf.o 00:08:40.845 LINK nvmf 00:08:41.787 LINK esnap 00:08:42.047 00:08:42.047 real 0m55.760s 00:08:42.047 user 7m51.420s 00:08:42.047 sys 4m20.072s 00:08:42.047 23:50:26 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:42.047 23:50:26 make -- common/autotest_common.sh@10 -- $ set +x 00:08:42.047 ************************************ 00:08:42.047 END TEST make 00:08:42.047 ************************************ 00:08:42.047 23:50:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:42.047 23:50:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:42.047 23:50:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:42.047 23:50:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:42.047 23:50:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:08:42.047 23:50:26 -- pm/common@44 -- $ pid=129871 00:08:42.047 23:50:26 -- pm/common@50 -- $ kill -TERM 129871 00:08:42.047 23:50:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:42.047 23:50:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:08:42.047 23:50:26 -- pm/common@44 -- $ pid=129873 00:08:42.047 23:50:26 -- pm/common@50 -- $ kill -TERM 129873 00:08:42.047 23:50:26 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:08:42.047 23:50:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:08:42.047 23:50:26 -- pm/common@44 -- $ pid=129875 00:08:42.047 23:50:26 -- pm/common@50 -- $ kill -TERM 129875 00:08:42.047 23:50:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:42.047 23:50:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:08:42.047 23:50:26 -- pm/common@44 -- $ pid=129897 00:08:42.047 23:50:26 -- pm/common@50 -- $ sudo -E kill -TERM 129897 00:08:42.047 23:50:26 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:42.047 23:50:26 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:08:42.305 23:50:26 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:42.305 23:50:26 -- common/autotest_common.sh@1711 -- # lcov --version 00:08:42.305 23:50:26 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:42.305 23:50:26 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:42.305 23:50:26 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.305 23:50:26 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.305 23:50:26 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.305 23:50:26 -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.305 23:50:26 -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.305 23:50:26 -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.305 23:50:26 -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.305 23:50:26 -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.305 23:50:26 -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.305 23:50:26 -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.305 23:50:26 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.305 23:50:26 -- scripts/common.sh@344 -- # case "$op" in 00:08:42.305 23:50:26 -- scripts/common.sh@345 -- # : 1 00:08:42.305 23:50:26 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.305 23:50:26 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.305 23:50:26 -- scripts/common.sh@365 -- # decimal 1 00:08:42.305 23:50:26 -- scripts/common.sh@353 -- # local d=1 00:08:42.305 23:50:26 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.305 23:50:26 -- scripts/common.sh@355 -- # echo 1 00:08:42.305 23:50:26 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.305 23:50:26 -- scripts/common.sh@366 -- # decimal 2 00:08:42.305 23:50:26 -- scripts/common.sh@353 -- # local d=2 00:08:42.305 23:50:26 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.305 23:50:26 -- scripts/common.sh@355 -- # echo 2 00:08:42.305 23:50:26 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.305 23:50:26 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.305 23:50:26 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.305 23:50:26 -- scripts/common.sh@368 -- # return 0 00:08:42.305 23:50:26 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.305 23:50:26 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:42.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.305 --rc genhtml_branch_coverage=1 00:08:42.305 --rc genhtml_function_coverage=1 00:08:42.305 --rc genhtml_legend=1 00:08:42.305 --rc geninfo_all_blocks=1 00:08:42.305 --rc geninfo_unexecuted_blocks=1 00:08:42.305 00:08:42.305 ' 00:08:42.305 23:50:26 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:42.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.306 --rc genhtml_branch_coverage=1 00:08:42.306 --rc genhtml_function_coverage=1 00:08:42.306 --rc genhtml_legend=1 00:08:42.306 --rc geninfo_all_blocks=1 00:08:42.306 --rc geninfo_unexecuted_blocks=1 00:08:42.306 00:08:42.306 ' 00:08:42.306 23:50:26 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:42.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.306 --rc genhtml_branch_coverage=1 00:08:42.306 --rc genhtml_function_coverage=1 00:08:42.306 --rc genhtml_legend=1 00:08:42.306 --rc geninfo_all_blocks=1 00:08:42.306 --rc geninfo_unexecuted_blocks=1 00:08:42.306 00:08:42.306 ' 00:08:42.306 23:50:26 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:42.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.306 --rc genhtml_branch_coverage=1 00:08:42.306 --rc genhtml_function_coverage=1 00:08:42.306 --rc genhtml_legend=1 00:08:42.306 --rc geninfo_all_blocks=1 00:08:42.306 --rc geninfo_unexecuted_blocks=1 00:08:42.306 00:08:42.306 ' 00:08:42.306 23:50:26 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.306 23:50:26 -- nvmf/common.sh@7 -- # uname -s 00:08:42.306 23:50:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.306 23:50:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.306 23:50:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.306 23:50:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.306 23:50:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.306 23:50:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.306 23:50:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.306 23:50:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.306 23:50:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.306 23:50:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.306 23:50:26 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:42.306 23:50:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:42.306 23:50:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.306 23:50:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.306 23:50:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.306 23:50:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.306 23:50:26 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.306 23:50:26 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.306 23:50:26 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.306 23:50:26 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.306 23:50:26 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.306 23:50:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.306 23:50:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.306 23:50:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.306 23:50:26 -- paths/export.sh@5 -- # export PATH 00:08:42.306 23:50:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.306 23:50:26 -- nvmf/common.sh@51 -- # : 0 00:08:42.306 23:50:26 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.306 23:50:26 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.306 23:50:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.306 23:50:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.306 23:50:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.306 23:50:26 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.306 23:50:26 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.306 23:50:26 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.306 23:50:26 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.306 23:50:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:42.306 23:50:26 -- spdk/autotest.sh@32 -- # uname -s 00:08:42.306 23:50:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:42.306 23:50:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:42.306 23:50:26 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
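The xtrace above shows scripts/common.sh choosing the lcov option spelling by comparing the installed lcov version (1.15) against 2 field by field with cmp_versions/decimal. A minimal standalone sketch of that dotted-version check, assuming plain bash and simplifying away the extra operators and stricter digit handling the real helpers support:

    # lt A B: succeed (return 0) when version A sorts strictly before version B.
    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # real helper is stricter here
    }
    lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            (( ver1[v] > ver2[v] )) && return 1
            (( ver1[v] < ver2[v] )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    # As in the trace: lt 1.15 2 succeeds, so the lcov 1.x flags
    # (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) are selected.
    lt 1.15 2 && echo "lcov < 2"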
00:08:42.306 23:50:26 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:08:42.306 23:50:26 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:08:42.306 23:50:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:42.566 23:50:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:42.566 23:50:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:42.566 23:50:26 -- spdk/autotest.sh@48 -- # udevadm_pid=193915 00:08:42.566 23:50:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:42.566 23:50:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:42.566 23:50:26 -- pm/common@17 -- # local monitor 00:08:42.566 23:50:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:42.566 23:50:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:42.566 23:50:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:42.566 23:50:26 -- pm/common@21 -- # date +%s 00:08:42.566 23:50:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:42.566 23:50:26 -- pm/common@21 -- # date +%s 00:08:42.566 23:50:26 -- pm/common@25 -- # sleep 1 00:08:42.566 23:50:26 -- pm/common@21 -- # date +%s 00:08:42.566 23:50:26 -- pm/common@21 -- # date +%s 00:08:42.566 23:50:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733784626 00:08:42.566 23:50:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733784626 00:08:42.566 23:50:26 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733784626 00:08:42.566 23:50:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733784626 00:08:42.566 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733784626_collect-vmstat.pm.log 00:08:42.566 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733784626_collect-cpu-load.pm.log 00:08:42.566 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733784626_collect-cpu-temp.pm.log 00:08:42.566 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733784626_collect-bmc-pm.bmc.pm.log 00:08:43.508 23:50:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:43.508 23:50:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:43.508 23:50:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.508 23:50:27 -- common/autotest_common.sh@10 -- # set +x 00:08:43.508 23:50:27 -- spdk/autotest.sh@59 -- # create_test_list 00:08:43.508 23:50:27 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:43.508 23:50:27 -- common/autotest_common.sh@10 -- # set +x 00:08:43.508 23:50:27 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:08:43.508 23:50:27 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:43.508 23:50:27 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:43.508 23:50:27 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:08:43.508 23:50:27 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:43.508 23:50:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:43.508 23:50:27 -- common/autotest_common.sh@1457 -- # uname 00:08:43.508 23:50:27 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:43.508 23:50:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:43.508 23:50:27 -- common/autotest_common.sh@1477 -- # uname 00:08:43.508 23:50:27 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:43.508 23:50:27 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:43.508 23:50:27 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:43.508 lcov: LCOV version 1.15 00:08:43.508 23:50:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:08:55.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:55.731 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:09:10.637 23:50:53 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:10.637 23:50:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:10.637 23:50:53 -- common/autotest_common.sh@10 -- # set +x 00:09:10.637 23:50:53 -- spdk/autotest.sh@78 -- # rm -f 00:09:10.637 23:50:53 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:09:12.550 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:09:12.550 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:09:12.550 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:09:12.550 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:09:12.550 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:09:12.550 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:09:12.550 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:09:12.550 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:09:12.550 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:09:12.550 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:09:12.550 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:09:12.550 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:09:12.550 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:09:12.810 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:09:12.810 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:09:12.810 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:09:12.810 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:09:12.810 23:50:57 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:09:12.810 23:50:57 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:12.810 23:50:57 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:12.810 23:50:57 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:09:12.810 23:50:57 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:09:12.810 23:50:57 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:09:12.810 23:50:57 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:12.810 23:50:57 -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:09:12.810 23:50:57 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:12.810 23:50:57 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:09:12.810 23:50:57 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:12.810 23:50:57 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:12.810 23:50:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:12.810 23:50:57 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:12.810 23:50:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:12.810 23:50:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:12.810 23:50:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:09:12.811 23:50:57 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:12.811 23:50:57 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:12.811 No valid GPT data, bailing 00:09:12.811 23:50:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:12.811 23:50:57 -- scripts/common.sh@394 -- # pt= 00:09:12.811 23:50:57 -- scripts/common.sh@395 -- # return 1 00:09:12.811 23:50:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:12.811 1+0 records in 00:09:12.811 1+0 records out 00:09:12.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494423 s, 212 MB/s 00:09:12.811 23:50:57 -- spdk/autotest.sh@105 -- # sync 00:09:12.811 23:50:57 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:12.811 23:50:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:12.811 23:50:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:20.955 23:51:04 -- spdk/autotest.sh@111 -- # uname -s 00:09:20.955 23:51:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:20.955 23:51:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:09:20.955 23:51:04 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:09:23.498 Hugepages 00:09:23.498 node hugesize free / total 00:09:23.498 node0 1048576kB 0 / 0 00:09:23.498 node0 2048kB 0 / 0 00:09:23.498 node1 1048576kB 0 / 0 00:09:23.498 node1 2048kB 0 / 0 00:09:23.498 00:09:23.498 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:23.498 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:09:23.498 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:09:23.759 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:09:23.759 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:09:23.759 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:09:23.759 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:09:23.759 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:09:23.759 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:09:23.759 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:09:23.759 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:09:23.759 I/OAT 0000:80:04.2 8086 2021 1 ioatdma 
- - 00:09:23.759 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:09:23.759 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:09:23.759 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:09:23.759 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:09:23.759 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:09:23.759 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:09:23.759 23:51:08 -- spdk/autotest.sh@117 -- # uname -s 00:09:23.759 23:51:08 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:23.759 23:51:08 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:23.759 23:51:08 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:09:27.058 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:27.318 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:29.230 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:09:29.230 23:51:13 -- common/autotest_common.sh@1517 -- # sleep 1 00:09:30.172 23:51:14 -- common/autotest_common.sh@1518 -- # bdfs=() 00:09:30.172 23:51:14 -- common/autotest_common.sh@1518 -- # local bdfs 00:09:30.172 23:51:14 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:09:30.172 23:51:14 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:09:30.172 23:51:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:30.172 23:51:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:30.172 23:51:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:30.172 23:51:14 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:09:30.172 23:51:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:30.172 23:51:14 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:09:30.172 23:51:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:09:30.172 23:51:14 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:09:33.471 Waiting for block devices as requested 00:09:33.471 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:33.731 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:33.731 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:33.731 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:33.991 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:33.991 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:33.991 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:34.252 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:34.252 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:34.252 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:34.513 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 
00:09:34.513 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:34.513 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:34.773 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:34.773 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:34.773 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:35.034 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:09:35.034 23:51:19 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:35.034 23:51:19 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:09:35.034 23:51:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:09:35.034 23:51:19 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:09:35.034 23:51:19 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:09:35.034 23:51:19 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:09:35.034 23:51:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:09:35.034 23:51:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:09:35.034 23:51:19 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:09:35.034 23:51:19 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:09:35.034 23:51:19 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:09:35.034 23:51:19 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:35.034 23:51:19 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:35.034 23:51:19 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:09:35.034 23:51:19 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:35.034 23:51:19 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:35.034 23:51:19 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:35.034 23:51:19 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:35.034 23:51:19 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:35.034 23:51:19 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:35.034 23:51:19 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:35.034 23:51:19 -- common/autotest_common.sh@1543 -- # continue 00:09:35.034 23:51:19 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:35.034 23:51:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.034 23:51:19 -- common/autotest_common.sh@10 -- # set +x 00:09:35.294 23:51:19 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:35.294 23:51:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:35.294 23:51:19 -- common/autotest_common.sh@10 -- # set +x 00:09:35.294 23:51:19 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:09:38.594 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:80:04.3 
(8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:38.594 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:40.520 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:09:40.520 23:51:24 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:40.520 23:51:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.520 23:51:24 -- common/autotest_common.sh@10 -- # set +x 00:09:40.520 23:51:24 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:40.520 23:51:24 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:09:40.520 23:51:24 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:09:40.520 23:51:24 -- common/autotest_common.sh@1563 -- # bdfs=() 00:09:40.520 23:51:24 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:09:40.520 23:51:24 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:09:40.520 23:51:24 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:09:40.520 23:51:24 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:09:40.520 23:51:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:40.520 23:51:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:40.520 23:51:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:40.520 23:51:24 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:09:40.520 23:51:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:40.520 23:51:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:09:40.520 23:51:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:09:40.520 23:51:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:40.520 23:51:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:09:40.520 23:51:24 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:09:40.520 23:51:24 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:09:40.520 23:51:24 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:09:40.520 23:51:24 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:09:40.520 23:51:24 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:09:40.520 23:51:24 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:09:40.520 23:51:24 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=210182 00:09:40.520 23:51:24 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:40.520 23:51:24 -- common/autotest_common.sh@1585 -- # waitforlisten 210182 00:09:40.520 23:51:24 -- common/autotest_common.sh@835 -- # '[' -z 210182 ']' 00:09:40.520 23:51:24 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.520 23:51:24 -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.521 23:51:24 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.521 23:51:24 -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.521 23:51:24 -- common/autotest_common.sh@10 -- # set +x 00:09:40.521 [2024-12-09 23:51:24.900311] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:09:40.521 [2024-12-09 23:51:24.900359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid210182 ] 00:09:40.521 [2024-12-09 23:51:24.989730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.783 [2024-12-09 23:51:25.032708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.352 23:51:25 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.352 23:51:25 -- common/autotest_common.sh@868 -- # return 0 00:09:41.352 23:51:25 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:09:41.352 23:51:25 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:09:41.352 23:51:25 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:09:44.646 nvme0n1 00:09:44.646 23:51:28 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:09:44.646 [2024-12-09 23:51:28.919944] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:09:44.646 request: 00:09:44.646 { 00:09:44.646 "nvme_ctrlr_name": "nvme0", 00:09:44.646 "password": "test", 00:09:44.646 "method": "bdev_nvme_opal_revert", 00:09:44.646 "req_id": 1 00:09:44.646 } 00:09:44.646 Got JSON-RPC error response 00:09:44.646 response: 00:09:44.646 { 00:09:44.646 "code": -32602, 00:09:44.646 "message": "Invalid parameters" 00:09:44.646 } 00:09:44.646 23:51:28 -- common/autotest_common.sh@1591 -- # true 00:09:44.646 23:51:28 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:09:44.646 23:51:28 -- common/autotest_common.sh@1595 -- # killprocess 210182 00:09:44.646 23:51:28 -- common/autotest_common.sh@954 -- # '[' -z 210182 ']' 00:09:44.646 23:51:28 -- common/autotest_common.sh@958 -- # kill -0 210182 00:09:44.646 23:51:28 -- common/autotest_common.sh@959 -- # uname 00:09:44.646 23:51:28 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.646 23:51:28 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 210182 00:09:44.646 23:51:28 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.646 23:51:28 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.646 23:51:28 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 210182' 00:09:44.646 killing process with pid 210182 00:09:44.646 23:51:28 -- common/autotest_common.sh@973 -- # kill 210182 00:09:44.646 23:51:28 -- common/autotest_common.sh@978 -- # wait 210182 00:09:47.187 23:51:31 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:47.187 23:51:31 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:47.187 23:51:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:47.187 23:51:31 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:47.187 23:51:31 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:47.187 23:51:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:47.187 23:51:31 -- common/autotest_common.sh@10 -- # set +x 00:09:47.187 23:51:31 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:09:47.187 23:51:31 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:09:47.187 23:51:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.187 23:51:31 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:09:47.187 23:51:31 -- common/autotest_common.sh@10 -- # set +x 00:09:47.187 ************************************ 00:09:47.187 START TEST env 00:09:47.187 ************************************ 00:09:47.187 23:51:31 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:09:47.187 * Looking for test storage... 00:09:47.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:09:47.187 23:51:31 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:47.187 23:51:31 env -- common/autotest_common.sh@1711 -- # lcov --version 00:09:47.187 23:51:31 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:47.187 23:51:31 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:47.187 23:51:31 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.187 23:51:31 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.187 23:51:31 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.187 23:51:31 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.187 23:51:31 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.187 23:51:31 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.187 23:51:31 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.187 23:51:31 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.187 23:51:31 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.187 23:51:31 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.187 23:51:31 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.187 23:51:31 env -- scripts/common.sh@344 -- # case "$op" in 00:09:47.187 23:51:31 env -- scripts/common.sh@345 -- # : 1 00:09:47.187 23:51:31 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.187 23:51:31 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.187 23:51:31 env -- scripts/common.sh@365 -- # decimal 1 00:09:47.187 23:51:31 env -- scripts/common.sh@353 -- # local d=1 00:09:47.187 23:51:31 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.187 23:51:31 env -- scripts/common.sh@355 -- # echo 1 00:09:47.187 23:51:31 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.187 23:51:31 env -- scripts/common.sh@366 -- # decimal 2 00:09:47.187 23:51:31 env -- scripts/common.sh@353 -- # local d=2 00:09:47.187 23:51:31 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.187 23:51:31 env -- scripts/common.sh@355 -- # echo 2 00:09:47.187 23:51:31 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.187 23:51:31 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.187 23:51:31 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.187 23:51:31 env -- scripts/common.sh@368 -- # return 0 00:09:47.187 23:51:31 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.187 23:51:31 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:47.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.187 --rc genhtml_branch_coverage=1 00:09:47.187 --rc genhtml_function_coverage=1 00:09:47.187 --rc genhtml_legend=1 00:09:47.187 --rc geninfo_all_blocks=1 00:09:47.187 --rc geninfo_unexecuted_blocks=1 00:09:47.187 00:09:47.187 ' 00:09:47.187 23:51:31 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:47.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.187 --rc genhtml_branch_coverage=1 00:09:47.187 --rc genhtml_function_coverage=1 00:09:47.187 --rc genhtml_legend=1 00:09:47.187 --rc geninfo_all_blocks=1 00:09:47.187 --rc geninfo_unexecuted_blocks=1 00:09:47.187 00:09:47.187 ' 00:09:47.187 23:51:31 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:47.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.187 --rc genhtml_branch_coverage=1 00:09:47.187 --rc genhtml_function_coverage=1 00:09:47.187 --rc genhtml_legend=1 00:09:47.187 --rc geninfo_all_blocks=1 00:09:47.187 --rc geninfo_unexecuted_blocks=1 00:09:47.187 00:09:47.187 ' 00:09:47.187 23:51:31 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:47.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.187 --rc genhtml_branch_coverage=1 00:09:47.187 --rc genhtml_function_coverage=1 00:09:47.187 --rc genhtml_legend=1 00:09:47.187 --rc geninfo_all_blocks=1 00:09:47.187 --rc geninfo_unexecuted_blocks=1 00:09:47.187 00:09:47.187 ' 00:09:47.187 23:51:31 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:09:47.187 23:51:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.187 23:51:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.187 23:51:31 env -- common/autotest_common.sh@10 -- # set +x 00:09:47.187 ************************************ 00:09:47.187 START TEST env_memory 00:09:47.187 ************************************ 00:09:47.187 23:51:31 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:09:47.187 00:09:47.187 00:09:47.187 CUnit - A unit testing framework for C - Version 2.1-3 00:09:47.187 http://cunit.sourceforge.net/ 00:09:47.187 00:09:47.187 00:09:47.187 Suite: memory 00:09:47.187 Test: alloc and free memory map ...[2024-12-09 23:51:31.399489] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:47.187 passed 00:09:47.187 Test: mem map translation ...[2024-12-09 23:51:31.418565] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:47.187 [2024-12-09 23:51:31.418579] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:47.187 [2024-12-09 23:51:31.418615] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:47.187 [2024-12-09 23:51:31.418624] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:47.187 passed 00:09:47.187 Test: mem map registration ...[2024-12-09 23:51:31.454075] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:47.187 [2024-12-09 23:51:31.454089] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:47.187 passed 00:09:47.187 Test: mem map adjacent registrations ...passed 00:09:47.187 00:09:47.187 Run Summary: Type Total Ran Passed Failed Inactive 00:09:47.187 suites 1 1 n/a 0 0 00:09:47.187 tests 4 4 4 0 0 00:09:47.187 asserts 152 152 152 0 n/a 00:09:47.187 00:09:47.187 Elapsed time = 0.123 seconds 00:09:47.187 00:09:47.187 real 0m0.132s 00:09:47.187 user 0m0.124s 00:09:47.187 sys 0m0.007s 00:09:47.187 23:51:31 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.187 23:51:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:47.187 ************************************ 00:09:47.187 END TEST env_memory 00:09:47.187 ************************************ 00:09:47.187 23:51:31 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:09:47.187 23:51:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.188 23:51:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.188 23:51:31 env -- common/autotest_common.sh@10 -- # set +x 00:09:47.188 ************************************ 00:09:47.188 START TEST env_vtophys 00:09:47.188 ************************************ 00:09:47.188 23:51:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:09:47.188 EAL: lib.eal log level changed from notice to debug 00:09:47.188 EAL: Detected lcore 0 as core 0 on socket 0 00:09:47.188 EAL: Detected lcore 1 as core 1 on socket 0 00:09:47.188 EAL: Detected lcore 2 as core 2 on socket 0 00:09:47.188 EAL: Detected lcore 3 as core 3 on socket 0 00:09:47.188 EAL: Detected lcore 4 as core 4 on socket 0 00:09:47.188 EAL: Detected lcore 5 as core 5 on socket 0 00:09:47.188 EAL: Detected lcore 6 as core 6 on socket 0 00:09:47.188 EAL: Detected lcore 7 as core 8 on socket 0 00:09:47.188 EAL: Detected lcore 8 as core 9 on socket 0 00:09:47.188 EAL: Detected lcore 9 as core 10 on socket 0 00:09:47.188 EAL: Detected lcore 10 as 
core 11 on socket 0 00:09:47.188 EAL: Detected lcore 11 as core 12 on socket 0 00:09:47.188 EAL: Detected lcore 12 as core 13 on socket 0 00:09:47.188 EAL: Detected lcore 13 as core 14 on socket 0 00:09:47.188 EAL: Detected lcore 14 as core 16 on socket 0 00:09:47.188 EAL: Detected lcore 15 as core 17 on socket 0 00:09:47.188 EAL: Detected lcore 16 as core 18 on socket 0 00:09:47.188 EAL: Detected lcore 17 as core 19 on socket 0 00:09:47.188 EAL: Detected lcore 18 as core 20 on socket 0 00:09:47.188 EAL: Detected lcore 19 as core 21 on socket 0 00:09:47.188 EAL: Detected lcore 20 as core 22 on socket 0 00:09:47.188 EAL: Detected lcore 21 as core 24 on socket 0 00:09:47.188 EAL: Detected lcore 22 as core 25 on socket 0 00:09:47.188 EAL: Detected lcore 23 as core 26 on socket 0 00:09:47.188 EAL: Detected lcore 24 as core 27 on socket 0 00:09:47.188 EAL: Detected lcore 25 as core 28 on socket 0 00:09:47.188 EAL: Detected lcore 26 as core 29 on socket 0 00:09:47.188 EAL: Detected lcore 27 as core 30 on socket 0 00:09:47.188 EAL: Detected lcore 28 as core 0 on socket 1 00:09:47.188 EAL: Detected lcore 29 as core 1 on socket 1 00:09:47.188 EAL: Detected lcore 30 as core 2 on socket 1 00:09:47.188 EAL: Detected lcore 31 as core 3 on socket 1 00:09:47.188 EAL: Detected lcore 32 as core 4 on socket 1 00:09:47.188 EAL: Detected lcore 33 as core 5 on socket 1 00:09:47.188 EAL: Detected lcore 34 as core 6 on socket 1 00:09:47.188 EAL: Detected lcore 35 as core 8 on socket 1 00:09:47.188 EAL: Detected lcore 36 as core 9 on socket 1 00:09:47.188 EAL: Detected lcore 37 as core 10 on socket 1 00:09:47.188 EAL: Detected lcore 38 as core 11 on socket 1 00:09:47.188 EAL: Detected lcore 39 as core 12 on socket 1 00:09:47.188 EAL: Detected lcore 40 as core 13 on socket 1 00:09:47.188 EAL: Detected lcore 41 as core 14 on socket 1 00:09:47.188 EAL: Detected lcore 42 as core 16 on socket 1 00:09:47.188 EAL: Detected lcore 43 as core 17 on socket 1 00:09:47.188 EAL: Detected lcore 44 as core 18 on socket 1 00:09:47.188 EAL: Detected lcore 45 as core 19 on socket 1 00:09:47.188 EAL: Detected lcore 46 as core 20 on socket 1 00:09:47.188 EAL: Detected lcore 47 as core 21 on socket 1 00:09:47.188 EAL: Detected lcore 48 as core 22 on socket 1 00:09:47.188 EAL: Detected lcore 49 as core 24 on socket 1 00:09:47.188 EAL: Detected lcore 50 as core 25 on socket 1 00:09:47.188 EAL: Detected lcore 51 as core 26 on socket 1 00:09:47.188 EAL: Detected lcore 52 as core 27 on socket 1 00:09:47.188 EAL: Detected lcore 53 as core 28 on socket 1 00:09:47.188 EAL: Detected lcore 54 as core 29 on socket 1 00:09:47.188 EAL: Detected lcore 55 as core 30 on socket 1 00:09:47.188 EAL: Detected lcore 56 as core 0 on socket 0 00:09:47.188 EAL: Detected lcore 57 as core 1 on socket 0 00:09:47.188 EAL: Detected lcore 58 as core 2 on socket 0 00:09:47.188 EAL: Detected lcore 59 as core 3 on socket 0 00:09:47.188 EAL: Detected lcore 60 as core 4 on socket 0 00:09:47.188 EAL: Detected lcore 61 as core 5 on socket 0 00:09:47.188 EAL: Detected lcore 62 as core 6 on socket 0 00:09:47.188 EAL: Detected lcore 63 as core 8 on socket 0 00:09:47.188 EAL: Detected lcore 64 as core 9 on socket 0 00:09:47.188 EAL: Detected lcore 65 as core 10 on socket 0 00:09:47.188 EAL: Detected lcore 66 as core 11 on socket 0 00:09:47.188 EAL: Detected lcore 67 as core 12 on socket 0 00:09:47.188 EAL: Detected lcore 68 as core 13 on socket 0 00:09:47.188 EAL: Detected lcore 69 as core 14 on socket 0 00:09:47.188 EAL: Detected lcore 70 as core 16 on socket 0 00:09:47.188 
EAL: Detected lcore 71 as core 17 on socket 0 00:09:47.188 EAL: Detected lcore 72 as core 18 on socket 0 00:09:47.188 EAL: Detected lcore 73 as core 19 on socket 0 00:09:47.188 EAL: Detected lcore 74 as core 20 on socket 0 00:09:47.188 EAL: Detected lcore 75 as core 21 on socket 0 00:09:47.188 EAL: Detected lcore 76 as core 22 on socket 0 00:09:47.188 EAL: Detected lcore 77 as core 24 on socket 0 00:09:47.188 EAL: Detected lcore 78 as core 25 on socket 0 00:09:47.188 EAL: Detected lcore 79 as core 26 on socket 0 00:09:47.188 EAL: Detected lcore 80 as core 27 on socket 0 00:09:47.188 EAL: Detected lcore 81 as core 28 on socket 0 00:09:47.188 EAL: Detected lcore 82 as core 29 on socket 0 00:09:47.188 EAL: Detected lcore 83 as core 30 on socket 0 00:09:47.188 EAL: Detected lcore 84 as core 0 on socket 1 00:09:47.188 EAL: Detected lcore 85 as core 1 on socket 1 00:09:47.188 EAL: Detected lcore 86 as core 2 on socket 1 00:09:47.188 EAL: Detected lcore 87 as core 3 on socket 1 00:09:47.188 EAL: Detected lcore 88 as core 4 on socket 1 00:09:47.188 EAL: Detected lcore 89 as core 5 on socket 1 00:09:47.188 EAL: Detected lcore 90 as core 6 on socket 1 00:09:47.188 EAL: Detected lcore 91 as core 8 on socket 1 00:09:47.188 EAL: Detected lcore 92 as core 9 on socket 1 00:09:47.188 EAL: Detected lcore 93 as core 10 on socket 1 00:09:47.188 EAL: Detected lcore 94 as core 11 on socket 1 00:09:47.188 EAL: Detected lcore 95 as core 12 on socket 1 00:09:47.188 EAL: Detected lcore 96 as core 13 on socket 1 00:09:47.188 EAL: Detected lcore 97 as core 14 on socket 1 00:09:47.188 EAL: Detected lcore 98 as core 16 on socket 1 00:09:47.188 EAL: Detected lcore 99 as core 17 on socket 1 00:09:47.188 EAL: Detected lcore 100 as core 18 on socket 1 00:09:47.188 EAL: Detected lcore 101 as core 19 on socket 1 00:09:47.188 EAL: Detected lcore 102 as core 20 on socket 1 00:09:47.188 EAL: Detected lcore 103 as core 21 on socket 1 00:09:47.188 EAL: Detected lcore 104 as core 22 on socket 1 00:09:47.188 EAL: Detected lcore 105 as core 24 on socket 1 00:09:47.188 EAL: Detected lcore 106 as core 25 on socket 1 00:09:47.188 EAL: Detected lcore 107 as core 26 on socket 1 00:09:47.188 EAL: Detected lcore 108 as core 27 on socket 1 00:09:47.188 EAL: Detected lcore 109 as core 28 on socket 1 00:09:47.188 EAL: Detected lcore 110 as core 29 on socket 1 00:09:47.188 EAL: Detected lcore 111 as core 30 on socket 1 00:09:47.188 EAL: Maximum logical cores by configuration: 128 00:09:47.188 EAL: Detected CPU lcores: 112 00:09:47.188 EAL: Detected NUMA nodes: 2 00:09:47.188 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:47.188 EAL: Detected shared linkage of DPDK 00:09:47.188 EAL: No shared files mode enabled, IPC will be disabled 00:09:47.188 EAL: Bus pci wants IOVA as 'DC' 00:09:47.188 EAL: Buses did not request a specific IOVA mode. 00:09:47.188 EAL: IOMMU is available, selecting IOVA as VA mode. 00:09:47.188 EAL: Selected IOVA mode 'VA' 00:09:47.188 EAL: Probing VFIO support... 00:09:47.188 EAL: IOMMU type 1 (Type 1) is supported 00:09:47.188 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:47.188 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:47.188 EAL: VFIO support initialized 00:09:47.188 EAL: Ask a virtual area of 0x2e000 bytes 00:09:47.188 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:47.188 EAL: Setting up physically contiguous memory... 
00:09:47.188 EAL: Setting maximum number of open files to 524288 00:09:47.188 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:47.188 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:09:47.188 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:47.188 EAL: Ask a virtual area of 0x61000 bytes 00:09:47.188 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:47.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:47.188 EAL: Ask a virtual area of 0x400000000 bytes 00:09:47.188 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:47.188 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:47.188 EAL: Ask a virtual area of 0x61000 bytes 00:09:47.188 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:47.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:47.188 EAL: Ask a virtual area of 0x400000000 bytes 00:09:47.188 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:47.188 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:47.188 EAL: Ask a virtual area of 0x61000 bytes 00:09:47.188 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:47.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:47.188 EAL: Ask a virtual area of 0x400000000 bytes 00:09:47.188 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:47.188 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:47.188 EAL: Ask a virtual area of 0x61000 bytes 00:09:47.188 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:47.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:47.188 EAL: Ask a virtual area of 0x400000000 bytes 00:09:47.188 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:47.188 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:47.188 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:09:47.188 EAL: Ask a virtual area of 0x61000 bytes 00:09:47.188 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:09:47.188 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:47.188 EAL: Ask a virtual area of 0x400000000 bytes 00:09:47.188 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:09:47.188 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:09:47.188 EAL: Ask a virtual area of 0x61000 bytes 00:09:47.188 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:09:47.188 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:47.188 EAL: Ask a virtual area of 0x400000000 bytes 00:09:47.188 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:09:47.188 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:09:47.188 EAL: Ask a virtual area of 0x61000 bytes 00:09:47.188 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:09:47.188 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:47.188 EAL: Ask a virtual area of 0x400000000 bytes 00:09:47.188 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:09:47.188 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:09:47.188 EAL: Ask a virtual area of 0x61000 bytes 00:09:47.188 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:09:47.188 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:47.188 EAL: Ask a virtual area of 0x400000000 bytes 00:09:47.188 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:09:47.188 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:09:47.188 EAL: Hugepages will be freed exactly as allocated. 00:09:47.189 EAL: No shared files mode enabled, IPC is disabled 00:09:47.189 EAL: No shared files mode enabled, IPC is disabled 00:09:47.189 EAL: TSC frequency is ~2500000 KHz 00:09:47.189 EAL: Main lcore 0 is ready (tid=7fa8927b9a00;cpuset=[0]) 00:09:47.189 EAL: Trying to obtain current memory policy. 00:09:47.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:47.189 EAL: Restoring previous memory policy: 0 00:09:47.189 EAL: request: mp_malloc_sync 00:09:47.189 EAL: No shared files mode enabled, IPC is disabled 00:09:47.189 EAL: Heap on socket 0 was expanded by 2MB 00:09:47.189 EAL: No shared files mode enabled, IPC is disabled 00:09:47.449 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:47.449 EAL: Mem event callback 'spdk:(nil)' registered 00:09:47.449 00:09:47.449 00:09:47.449 CUnit - A unit testing framework for C - Version 2.1-3 00:09:47.449 http://cunit.sourceforge.net/ 00:09:47.449 00:09:47.449 00:09:47.449 Suite: components_suite 00:09:47.449 Test: vtophys_malloc_test ...passed 00:09:47.449 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:47.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:47.449 EAL: Restoring previous memory policy: 4 00:09:47.449 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.449 EAL: request: mp_malloc_sync 00:09:47.449 EAL: No shared files mode enabled, IPC is disabled 00:09:47.449 EAL: Heap on socket 0 was expanded by 4MB 00:09:47.449 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.449 EAL: request: mp_malloc_sync 00:09:47.449 EAL: No shared files mode enabled, IPC is disabled 00:09:47.449 EAL: Heap on socket 0 was shrunk by 4MB 00:09:47.449 EAL: Trying to obtain current memory policy. 00:09:47.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:47.449 EAL: Restoring previous memory policy: 4 00:09:47.449 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.449 EAL: request: mp_malloc_sync 00:09:47.449 EAL: No shared files mode enabled, IPC is disabled 00:09:47.449 EAL: Heap on socket 0 was expanded by 6MB 00:09:47.449 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.449 EAL: request: mp_malloc_sync 00:09:47.449 EAL: No shared files mode enabled, IPC is disabled 00:09:47.449 EAL: Heap on socket 0 was shrunk by 6MB 00:09:47.449 EAL: Trying to obtain current memory policy. 00:09:47.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:47.449 EAL: Restoring previous memory policy: 4 00:09:47.449 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.449 EAL: request: mp_malloc_sync 00:09:47.449 EAL: No shared files mode enabled, IPC is disabled 00:09:47.449 EAL: Heap on socket 0 was expanded by 10MB 00:09:47.449 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.449 EAL: request: mp_malloc_sync 00:09:47.449 EAL: No shared files mode enabled, IPC is disabled 00:09:47.449 EAL: Heap on socket 0 was shrunk by 10MB 00:09:47.450 EAL: Trying to obtain current memory policy. 
00:09:47.450 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:47.450 EAL: Restoring previous memory policy: 4 00:09:47.450 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.450 EAL: request: mp_malloc_sync 00:09:47.450 EAL: No shared files mode enabled, IPC is disabled 00:09:47.450 EAL: Heap on socket 0 was expanded by 18MB 00:09:47.450 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.450 EAL: request: mp_malloc_sync 00:09:47.450 EAL: No shared files mode enabled, IPC is disabled 00:09:47.450 EAL: Heap on socket 0 was shrunk by 18MB 00:09:47.450 EAL: Trying to obtain current memory policy. 00:09:47.450 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:47.450 EAL: Restoring previous memory policy: 4 00:09:47.450 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.450 EAL: request: mp_malloc_sync 00:09:47.450 EAL: No shared files mode enabled, IPC is disabled 00:09:47.450 EAL: Heap on socket 0 was expanded by 34MB 00:09:47.450 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.450 EAL: request: mp_malloc_sync 00:09:47.450 EAL: No shared files mode enabled, IPC is disabled 00:09:47.450 EAL: Heap on socket 0 was shrunk by 34MB 00:09:47.450 EAL: Trying to obtain current memory policy. 00:09:47.450 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:47.450 EAL: Restoring previous memory policy: 4 00:09:47.450 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.450 EAL: request: mp_malloc_sync 00:09:47.450 EAL: No shared files mode enabled, IPC is disabled 00:09:47.450 EAL: Heap on socket 0 was expanded by 66MB 00:09:47.450 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.450 EAL: request: mp_malloc_sync 00:09:47.450 EAL: No shared files mode enabled, IPC is disabled 00:09:47.450 EAL: Heap on socket 0 was shrunk by 66MB 00:09:47.450 EAL: Trying to obtain current memory policy. 00:09:47.450 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:47.450 EAL: Restoring previous memory policy: 4 00:09:47.450 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.450 EAL: request: mp_malloc_sync 00:09:47.450 EAL: No shared files mode enabled, IPC is disabled 00:09:47.450 EAL: Heap on socket 0 was expanded by 130MB 00:09:47.450 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.450 EAL: request: mp_malloc_sync 00:09:47.450 EAL: No shared files mode enabled, IPC is disabled 00:09:47.450 EAL: Heap on socket 0 was shrunk by 130MB 00:09:47.450 EAL: Trying to obtain current memory policy. 00:09:47.450 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:47.450 EAL: Restoring previous memory policy: 4 00:09:47.450 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.450 EAL: request: mp_malloc_sync 00:09:47.450 EAL: No shared files mode enabled, IPC is disabled 00:09:47.450 EAL: Heap on socket 0 was expanded by 258MB 00:09:47.450 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.710 EAL: request: mp_malloc_sync 00:09:47.710 EAL: No shared files mode enabled, IPC is disabled 00:09:47.710 EAL: Heap on socket 0 was shrunk by 258MB 00:09:47.710 EAL: Trying to obtain current memory policy. 
00:09:47.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:47.710 EAL: Restoring previous memory policy: 4 00:09:47.710 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.710 EAL: request: mp_malloc_sync 00:09:47.710 EAL: No shared files mode enabled, IPC is disabled 00:09:47.710 EAL: Heap on socket 0 was expanded by 514MB 00:09:47.710 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.970 EAL: request: mp_malloc_sync 00:09:47.970 EAL: No shared files mode enabled, IPC is disabled 00:09:47.970 EAL: Heap on socket 0 was shrunk by 514MB 00:09:47.970 EAL: Trying to obtain current memory policy. 00:09:47.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:47.970 EAL: Restoring previous memory policy: 4 00:09:47.970 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.970 EAL: request: mp_malloc_sync 00:09:47.970 EAL: No shared files mode enabled, IPC is disabled 00:09:47.970 EAL: Heap on socket 0 was expanded by 1026MB 00:09:48.230 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.493 EAL: request: mp_malloc_sync 00:09:48.493 EAL: No shared files mode enabled, IPC is disabled 00:09:48.493 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:48.493 passed 00:09:48.493 00:09:48.493 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.493 suites 1 1 n/a 0 0 00:09:48.493 tests 2 2 2 0 0 00:09:48.493 asserts 497 497 497 0 n/a 00:09:48.493 00:09:48.493 Elapsed time = 0.978 seconds 00:09:48.493 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.493 EAL: request: mp_malloc_sync 00:09:48.493 EAL: No shared files mode enabled, IPC is disabled 00:09:48.493 EAL: Heap on socket 0 was shrunk by 2MB 00:09:48.493 EAL: No shared files mode enabled, IPC is disabled 00:09:48.493 EAL: No shared files mode enabled, IPC is disabled 00:09:48.493 EAL: No shared files mode enabled, IPC is disabled 00:09:48.493 00:09:48.493 real 0m1.132s 00:09:48.493 user 0m0.668s 00:09:48.493 sys 0m0.434s 00:09:48.493 23:51:32 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.493 23:51:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:48.493 ************************************ 00:09:48.493 END TEST env_vtophys 00:09:48.493 ************************************ 00:09:48.493 23:51:32 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:09:48.493 23:51:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:48.493 23:51:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.493 23:51:32 env -- common/autotest_common.sh@10 -- # set +x 00:09:48.493 ************************************ 00:09:48.493 START TEST env_pci 00:09:48.493 ************************************ 00:09:48.493 23:51:32 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:09:48.493 00:09:48.493 00:09:48.493 CUnit - A unit testing framework for C - Version 2.1-3 00:09:48.493 http://cunit.sourceforge.net/ 00:09:48.493 00:09:48.493 00:09:48.493 Suite: pci 00:09:48.493 Test: pci_hook ...[2024-12-09 23:51:32.819359] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 211716 has claimed it 00:09:48.493 EAL: Cannot find device (10000:00:01.0) 00:09:48.493 EAL: Failed to attach device on primary process 00:09:48.493 passed 00:09:48.493 00:09:48.493 Run Summary: Type Total Ran Passed Failed Inactive 
00:09:48.494 suites 1 1 n/a 0 0 00:09:48.494 tests 1 1 1 0 0 00:09:48.494 asserts 25 25 25 0 n/a 00:09:48.494 00:09:48.494 Elapsed time = 0.033 seconds 00:09:48.494 00:09:48.494 real 0m0.055s 00:09:48.494 user 0m0.017s 00:09:48.494 sys 0m0.038s 00:09:48.494 23:51:32 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.494 23:51:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:48.494 ************************************ 00:09:48.494 END TEST env_pci 00:09:48.494 ************************************ 00:09:48.494 23:51:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:48.494 23:51:32 env -- env/env.sh@15 -- # uname 00:09:48.494 23:51:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:48.494 23:51:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:48.494 23:51:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:48.494 23:51:32 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:48.494 23:51:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.494 23:51:32 env -- common/autotest_common.sh@10 -- # set +x 00:09:48.494 ************************************ 00:09:48.494 START TEST env_dpdk_post_init 00:09:48.494 ************************************ 00:09:48.494 23:51:32 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:48.755 EAL: Detected CPU lcores: 112 00:09:48.755 EAL: Detected NUMA nodes: 2 00:09:48.755 EAL: Detected shared linkage of DPDK 00:09:48.755 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:48.755 EAL: Selected IOVA mode 'VA' 00:09:48.755 EAL: VFIO support initialized 00:09:48.755 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:48.755 EAL: Using IOMMU type 1 (Type 1) 00:09:48.755 EAL: Ignore mapping IO port bar(1) 00:09:48.755 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:09:48.755 EAL: Ignore mapping IO port bar(1) 00:09:48.755 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:09:48.755 EAL: Ignore mapping IO port bar(1) 00:09:48.755 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:09:48.755 EAL: Ignore mapping IO port bar(1) 00:09:48.755 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:09:48.755 EAL: Ignore mapping IO port bar(1) 00:09:48.755 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:09:48.755 EAL: Ignore mapping IO port bar(1) 00:09:48.755 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:09:48.755 EAL: Ignore mapping IO port bar(1) 00:09:48.755 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:09:48.755 EAL: Ignore mapping IO port bar(1) 00:09:48.755 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:09:48.755 EAL: Ignore mapping IO port bar(1) 00:09:48.755 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:09:48.755 EAL: Ignore mapping IO port bar(1) 00:09:48.755 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:09:49.016 EAL: Ignore mapping IO port bar(1) 00:09:49.016 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:09:49.016 
EAL: Ignore mapping IO port bar(1) 00:09:49.016 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:09:49.016 EAL: Ignore mapping IO port bar(1) 00:09:49.016 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:09:49.016 EAL: Ignore mapping IO port bar(1) 00:09:49.016 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:09:49.016 EAL: Ignore mapping IO port bar(1) 00:09:49.016 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:09:49.016 EAL: Ignore mapping IO port bar(1) 00:09:49.016 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:09:49.587 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:09:53.788 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:09:53.788 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:09:53.788 Starting DPDK initialization... 00:09:53.788 Starting SPDK post initialization... 00:09:53.788 SPDK NVMe probe 00:09:53.788 Attaching to 0000:d8:00.0 00:09:53.788 Attached to 0000:d8:00.0 00:09:53.788 Cleaning up... 00:09:53.788 00:09:53.788 real 0m4.981s 00:09:53.788 user 0m3.433s 00:09:53.788 sys 0m0.602s 00:09:53.788 23:51:37 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.788 23:51:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:53.788 ************************************ 00:09:53.788 END TEST env_dpdk_post_init 00:09:53.788 ************************************ 00:09:53.788 23:51:37 env -- env/env.sh@26 -- # uname 00:09:53.788 23:51:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:53.788 23:51:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:09:53.788 23:51:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:53.788 23:51:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.788 23:51:37 env -- common/autotest_common.sh@10 -- # set +x 00:09:53.788 ************************************ 00:09:53.788 START TEST env_mem_callbacks 00:09:53.788 ************************************ 00:09:53.788 23:51:38 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:09:53.788 EAL: Detected CPU lcores: 112 00:09:53.788 EAL: Detected NUMA nodes: 2 00:09:53.788 EAL: Detected shared linkage of DPDK 00:09:53.788 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:53.788 EAL: Selected IOVA mode 'VA' 00:09:53.788 EAL: VFIO support initialized 00:09:53.788 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:53.788 00:09:53.788 00:09:53.788 CUnit - A unit testing framework for C - Version 2.1-3 00:09:53.788 http://cunit.sourceforge.net/ 00:09:53.788 00:09:53.788 00:09:53.788 Suite: memory 00:09:53.788 Test: test ... 
00:09:53.788 register 0x200000200000 2097152 00:09:53.788 malloc 3145728 00:09:53.788 register 0x200000400000 4194304 00:09:53.788 buf 0x200000500000 len 3145728 PASSED 00:09:53.788 malloc 64 00:09:53.788 buf 0x2000004fff40 len 64 PASSED 00:09:53.788 malloc 4194304 00:09:53.788 register 0x200000800000 6291456 00:09:53.788 buf 0x200000a00000 len 4194304 PASSED 00:09:53.788 free 0x200000500000 3145728 00:09:53.788 free 0x2000004fff40 64 00:09:53.788 unregister 0x200000400000 4194304 PASSED 00:09:53.788 free 0x200000a00000 4194304 00:09:53.788 unregister 0x200000800000 6291456 PASSED 00:09:53.788 malloc 8388608 00:09:53.788 register 0x200000400000 10485760 00:09:53.788 buf 0x200000600000 len 8388608 PASSED 00:09:53.788 free 0x200000600000 8388608 00:09:53.788 unregister 0x200000400000 10485760 PASSED 00:09:53.788 passed 00:09:53.788 00:09:53.788 Run Summary: Type Total Ran Passed Failed Inactive 00:09:53.788 suites 1 1 n/a 0 0 00:09:53.788 tests 1 1 1 0 0 00:09:53.788 asserts 15 15 15 0 n/a 00:09:53.788 00:09:53.788 Elapsed time = 0.008 seconds 00:09:53.788 00:09:53.788 real 0m0.069s 00:09:53.788 user 0m0.020s 00:09:53.788 sys 0m0.048s 00:09:53.788 23:51:38 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.788 23:51:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:53.788 ************************************ 00:09:53.788 END TEST env_mem_callbacks 00:09:53.788 ************************************ 00:09:53.788 00:09:53.788 real 0m6.996s 00:09:53.788 user 0m4.518s 00:09:53.788 sys 0m1.550s 00:09:53.788 23:51:38 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.788 23:51:38 env -- common/autotest_common.sh@10 -- # set +x 00:09:53.788 ************************************ 00:09:53.788 END TEST env 00:09:53.788 ************************************ 00:09:53.788 23:51:38 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:09:53.788 23:51:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:53.788 23:51:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.788 23:51:38 -- common/autotest_common.sh@10 -- # set +x 00:09:53.788 ************************************ 00:09:53.788 START TEST rpc 00:09:53.788 ************************************ 00:09:53.788 23:51:38 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:09:54.049 * Looking for test storage... 
00:09:54.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:54.049 23:51:38 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.049 23:51:38 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.049 23:51:38 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.049 23:51:38 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.049 23:51:38 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.049 23:51:38 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.049 23:51:38 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.049 23:51:38 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.049 23:51:38 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.049 23:51:38 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.049 23:51:38 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.049 23:51:38 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.049 23:51:38 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.049 23:51:38 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.049 23:51:38 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.049 23:51:38 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:54.049 23:51:38 rpc -- scripts/common.sh@345 -- # : 1 00:09:54.049 23:51:38 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.049 23:51:38 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:54.049 23:51:38 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:54.049 23:51:38 rpc -- scripts/common.sh@353 -- # local d=1 00:09:54.049 23:51:38 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.049 23:51:38 rpc -- scripts/common.sh@355 -- # echo 1 00:09:54.049 23:51:38 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.049 23:51:38 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:54.049 23:51:38 rpc -- scripts/common.sh@353 -- # local d=2 00:09:54.049 23:51:38 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.049 23:51:38 rpc -- scripts/common.sh@355 -- # echo 2 00:09:54.049 23:51:38 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.049 23:51:38 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.049 23:51:38 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.049 23:51:38 rpc -- scripts/common.sh@368 -- # return 0 00:09:54.049 23:51:38 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.049 23:51:38 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:54.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.049 --rc genhtml_branch_coverage=1 00:09:54.049 --rc genhtml_function_coverage=1 00:09:54.049 --rc genhtml_legend=1 00:09:54.049 --rc geninfo_all_blocks=1 00:09:54.049 --rc geninfo_unexecuted_blocks=1 00:09:54.049 00:09:54.049 ' 00:09:54.049 23:51:38 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:54.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.049 --rc genhtml_branch_coverage=1 00:09:54.050 --rc genhtml_function_coverage=1 00:09:54.050 --rc genhtml_legend=1 00:09:54.050 --rc geninfo_all_blocks=1 00:09:54.050 --rc geninfo_unexecuted_blocks=1 00:09:54.050 00:09:54.050 ' 00:09:54.050 23:51:38 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:54.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.050 --rc genhtml_branch_coverage=1 00:09:54.050 --rc genhtml_function_coverage=1 
00:09:54.050 --rc genhtml_legend=1 00:09:54.050 --rc geninfo_all_blocks=1 00:09:54.050 --rc geninfo_unexecuted_blocks=1 00:09:54.050 00:09:54.050 ' 00:09:54.050 23:51:38 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:54.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.050 --rc genhtml_branch_coverage=1 00:09:54.050 --rc genhtml_function_coverage=1 00:09:54.050 --rc genhtml_legend=1 00:09:54.050 --rc geninfo_all_blocks=1 00:09:54.050 --rc geninfo_unexecuted_blocks=1 00:09:54.050 00:09:54.050 ' 00:09:54.050 23:51:38 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:09:54.050 23:51:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=212667 00:09:54.050 23:51:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:54.050 23:51:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 212667 00:09:54.050 23:51:38 rpc -- common/autotest_common.sh@835 -- # '[' -z 212667 ']' 00:09:54.050 23:51:38 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.050 23:51:38 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.050 23:51:38 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.050 23:51:38 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.050 23:51:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.050 [2024-12-09 23:51:38.456241] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:09:54.050 [2024-12-09 23:51:38.456294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid212667 ] 00:09:54.311 [2024-12-09 23:51:38.546622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.311 [2024-12-09 23:51:38.584311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:54.311 [2024-12-09 23:51:38.584346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 212667' to capture a snapshot of events at runtime. 00:09:54.311 [2024-12-09 23:51:38.584355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.311 [2024-12-09 23:51:38.584363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.311 [2024-12-09 23:51:38.584370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid212667 for offline analysis/debug. 
00:09:54.311 [2024-12-09 23:51:38.584939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.881 23:51:39 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.881 23:51:39 rpc -- common/autotest_common.sh@868 -- # return 0 00:09:54.881 23:51:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:54.881 23:51:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:54.881 23:51:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:54.881 23:51:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:54.881 23:51:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:54.881 23:51:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.881 23:51:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.881 ************************************ 00:09:54.881 START TEST rpc_integrity 00:09:54.881 ************************************ 00:09:54.881 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:54.881 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:54.881 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.881 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:54.881 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.882 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:54.882 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:55.142 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:55.142 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:55.142 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.142 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.142 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.142 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:55.142 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:55.142 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.142 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.142 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.142 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:55.142 { 00:09:55.142 "name": "Malloc0", 00:09:55.142 "aliases": [ 00:09:55.142 "9fb7ad3c-bbb6-424b-a1ff-94e37c479fef" 00:09:55.142 ], 00:09:55.142 "product_name": "Malloc disk", 00:09:55.142 "block_size": 512, 00:09:55.142 "num_blocks": 16384, 00:09:55.142 "uuid": "9fb7ad3c-bbb6-424b-a1ff-94e37c479fef", 00:09:55.142 "assigned_rate_limits": { 00:09:55.142 "rw_ios_per_sec": 0, 00:09:55.142 "rw_mbytes_per_sec": 0, 00:09:55.142 "r_mbytes_per_sec": 0, 00:09:55.142 "w_mbytes_per_sec": 0 00:09:55.142 }, 
00:09:55.142 "claimed": false, 00:09:55.142 "zoned": false, 00:09:55.142 "supported_io_types": { 00:09:55.142 "read": true, 00:09:55.142 "write": true, 00:09:55.142 "unmap": true, 00:09:55.142 "flush": true, 00:09:55.142 "reset": true, 00:09:55.142 "nvme_admin": false, 00:09:55.142 "nvme_io": false, 00:09:55.142 "nvme_io_md": false, 00:09:55.142 "write_zeroes": true, 00:09:55.142 "zcopy": true, 00:09:55.142 "get_zone_info": false, 00:09:55.142 "zone_management": false, 00:09:55.142 "zone_append": false, 00:09:55.142 "compare": false, 00:09:55.142 "compare_and_write": false, 00:09:55.142 "abort": true, 00:09:55.142 "seek_hole": false, 00:09:55.142 "seek_data": false, 00:09:55.142 "copy": true, 00:09:55.142 "nvme_iov_md": false 00:09:55.142 }, 00:09:55.142 "memory_domains": [ 00:09:55.142 { 00:09:55.142 "dma_device_id": "system", 00:09:55.142 "dma_device_type": 1 00:09:55.142 }, 00:09:55.142 { 00:09:55.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.142 "dma_device_type": 2 00:09:55.142 } 00:09:55.142 ], 00:09:55.142 "driver_specific": {} 00:09:55.142 } 00:09:55.142 ]' 00:09:55.142 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:55.142 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:55.142 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:55.142 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.142 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.142 [2024-12-09 23:51:39.467815] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:55.142 [2024-12-09 23:51:39.467853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.142 [2024-12-09 23:51:39.467867] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x225d5a0 00:09:55.142 [2024-12-09 23:51:39.467876] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.142 [2024-12-09 23:51:39.468988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.142 [2024-12-09 23:51:39.469013] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:55.142 Passthru0 00:09:55.142 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.142 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:55.142 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.142 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.142 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.143 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:55.143 { 00:09:55.143 "name": "Malloc0", 00:09:55.143 "aliases": [ 00:09:55.143 "9fb7ad3c-bbb6-424b-a1ff-94e37c479fef" 00:09:55.143 ], 00:09:55.143 "product_name": "Malloc disk", 00:09:55.143 "block_size": 512, 00:09:55.143 "num_blocks": 16384, 00:09:55.143 "uuid": "9fb7ad3c-bbb6-424b-a1ff-94e37c479fef", 00:09:55.143 "assigned_rate_limits": { 00:09:55.143 "rw_ios_per_sec": 0, 00:09:55.143 "rw_mbytes_per_sec": 0, 00:09:55.143 "r_mbytes_per_sec": 0, 00:09:55.143 "w_mbytes_per_sec": 0 00:09:55.143 }, 00:09:55.143 "claimed": true, 00:09:55.143 "claim_type": "exclusive_write", 00:09:55.143 "zoned": false, 00:09:55.143 "supported_io_types": { 00:09:55.143 "read": true, 00:09:55.143 "write": true, 00:09:55.143 "unmap": true, 00:09:55.143 "flush": 
true, 00:09:55.143 "reset": true, 00:09:55.143 "nvme_admin": false, 00:09:55.143 "nvme_io": false, 00:09:55.143 "nvme_io_md": false, 00:09:55.143 "write_zeroes": true, 00:09:55.143 "zcopy": true, 00:09:55.143 "get_zone_info": false, 00:09:55.143 "zone_management": false, 00:09:55.143 "zone_append": false, 00:09:55.143 "compare": false, 00:09:55.143 "compare_and_write": false, 00:09:55.143 "abort": true, 00:09:55.143 "seek_hole": false, 00:09:55.143 "seek_data": false, 00:09:55.143 "copy": true, 00:09:55.143 "nvme_iov_md": false 00:09:55.143 }, 00:09:55.143 "memory_domains": [ 00:09:55.143 { 00:09:55.143 "dma_device_id": "system", 00:09:55.143 "dma_device_type": 1 00:09:55.143 }, 00:09:55.143 { 00:09:55.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.143 "dma_device_type": 2 00:09:55.143 } 00:09:55.143 ], 00:09:55.143 "driver_specific": {} 00:09:55.143 }, 00:09:55.143 { 00:09:55.143 "name": "Passthru0", 00:09:55.143 "aliases": [ 00:09:55.143 "a9e2fc33-eb77-5133-a2fe-3cdca8b8061e" 00:09:55.143 ], 00:09:55.143 "product_name": "passthru", 00:09:55.143 "block_size": 512, 00:09:55.143 "num_blocks": 16384, 00:09:55.143 "uuid": "a9e2fc33-eb77-5133-a2fe-3cdca8b8061e", 00:09:55.143 "assigned_rate_limits": { 00:09:55.143 "rw_ios_per_sec": 0, 00:09:55.143 "rw_mbytes_per_sec": 0, 00:09:55.143 "r_mbytes_per_sec": 0, 00:09:55.143 "w_mbytes_per_sec": 0 00:09:55.143 }, 00:09:55.143 "claimed": false, 00:09:55.143 "zoned": false, 00:09:55.143 "supported_io_types": { 00:09:55.143 "read": true, 00:09:55.143 "write": true, 00:09:55.143 "unmap": true, 00:09:55.143 "flush": true, 00:09:55.143 "reset": true, 00:09:55.143 "nvme_admin": false, 00:09:55.143 "nvme_io": false, 00:09:55.143 "nvme_io_md": false, 00:09:55.143 "write_zeroes": true, 00:09:55.143 "zcopy": true, 00:09:55.143 "get_zone_info": false, 00:09:55.143 "zone_management": false, 00:09:55.143 "zone_append": false, 00:09:55.143 "compare": false, 00:09:55.143 "compare_and_write": false, 00:09:55.143 "abort": true, 00:09:55.143 "seek_hole": false, 00:09:55.143 "seek_data": false, 00:09:55.143 "copy": true, 00:09:55.143 "nvme_iov_md": false 00:09:55.143 }, 00:09:55.143 "memory_domains": [ 00:09:55.143 { 00:09:55.143 "dma_device_id": "system", 00:09:55.143 "dma_device_type": 1 00:09:55.143 }, 00:09:55.143 { 00:09:55.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.143 "dma_device_type": 2 00:09:55.143 } 00:09:55.143 ], 00:09:55.143 "driver_specific": { 00:09:55.143 "passthru": { 00:09:55.143 "name": "Passthru0", 00:09:55.143 "base_bdev_name": "Malloc0" 00:09:55.143 } 00:09:55.143 } 00:09:55.143 } 00:09:55.143 ]' 00:09:55.143 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:55.143 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:55.143 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:55.143 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.143 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.143 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.143 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:55.143 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.143 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.143 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.143 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:09:55.143 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.143 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.143 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.143 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:55.143 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:55.403 23:51:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:55.403 00:09:55.403 real 0m0.294s 00:09:55.403 user 0m0.182s 00:09:55.403 sys 0m0.048s 00:09:55.403 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.403 23:51:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.403 ************************************ 00:09:55.403 END TEST rpc_integrity 00:09:55.403 ************************************ 00:09:55.403 23:51:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:55.403 23:51:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.403 23:51:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.403 23:51:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.403 ************************************ 00:09:55.403 START TEST rpc_plugins 00:09:55.403 ************************************ 00:09:55.403 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:55.403 23:51:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:55.403 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.403 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:55.403 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.403 23:51:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:55.403 23:51:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:55.403 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.403 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:55.403 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.403 23:51:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:55.403 { 00:09:55.403 "name": "Malloc1", 00:09:55.403 "aliases": [ 00:09:55.403 "7b456451-0091-4f59-b8fb-4dcc5e093eef" 00:09:55.403 ], 00:09:55.403 "product_name": "Malloc disk", 00:09:55.403 "block_size": 4096, 00:09:55.403 "num_blocks": 256, 00:09:55.403 "uuid": "7b456451-0091-4f59-b8fb-4dcc5e093eef", 00:09:55.403 "assigned_rate_limits": { 00:09:55.403 "rw_ios_per_sec": 0, 00:09:55.403 "rw_mbytes_per_sec": 0, 00:09:55.403 "r_mbytes_per_sec": 0, 00:09:55.403 "w_mbytes_per_sec": 0 00:09:55.403 }, 00:09:55.403 "claimed": false, 00:09:55.403 "zoned": false, 00:09:55.403 "supported_io_types": { 00:09:55.403 "read": true, 00:09:55.403 "write": true, 00:09:55.403 "unmap": true, 00:09:55.403 "flush": true, 00:09:55.403 "reset": true, 00:09:55.403 "nvme_admin": false, 00:09:55.403 "nvme_io": false, 00:09:55.403 "nvme_io_md": false, 00:09:55.403 "write_zeroes": true, 00:09:55.403 "zcopy": true, 00:09:55.403 "get_zone_info": false, 00:09:55.403 "zone_management": false, 00:09:55.403 "zone_append": false, 00:09:55.403 "compare": false, 00:09:55.403 "compare_and_write": false, 00:09:55.403 "abort": true, 00:09:55.403 "seek_hole": false, 00:09:55.403 "seek_data": false, 00:09:55.403 "copy": true, 00:09:55.403 "nvme_iov_md": false 
00:09:55.403 }, 00:09:55.403 "memory_domains": [ 00:09:55.403 { 00:09:55.403 "dma_device_id": "system", 00:09:55.403 "dma_device_type": 1 00:09:55.403 }, 00:09:55.403 { 00:09:55.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.403 "dma_device_type": 2 00:09:55.403 } 00:09:55.403 ], 00:09:55.404 "driver_specific": {} 00:09:55.404 } 00:09:55.404 ]' 00:09:55.404 23:51:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:55.404 23:51:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:55.404 23:51:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:55.404 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.404 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:55.404 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.404 23:51:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:55.404 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.404 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:55.404 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.404 23:51:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:55.404 23:51:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:55.404 23:51:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:55.404 00:09:55.404 real 0m0.143s 00:09:55.404 user 0m0.081s 00:09:55.404 sys 0m0.027s 00:09:55.404 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.404 23:51:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:55.404 ************************************ 00:09:55.404 END TEST rpc_plugins 00:09:55.404 ************************************ 00:09:55.664 23:51:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:55.664 23:51:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.664 23:51:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.664 23:51:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.664 ************************************ 00:09:55.664 START TEST rpc_trace_cmd_test 00:09:55.664 ************************************ 00:09:55.664 23:51:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:09:55.664 23:51:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:55.664 23:51:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:55.664 23:51:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.664 23:51:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.664 23:51:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.664 23:51:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:55.664 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid212667", 00:09:55.664 "tpoint_group_mask": "0x8", 00:09:55.664 "iscsi_conn": { 00:09:55.664 "mask": "0x2", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "scsi": { 00:09:55.664 "mask": "0x4", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "bdev": { 00:09:55.664 "mask": "0x8", 00:09:55.664 "tpoint_mask": "0xffffffffffffffff" 00:09:55.664 }, 00:09:55.664 "nvmf_rdma": { 00:09:55.664 "mask": "0x10", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "nvmf_tcp": { 00:09:55.664 "mask": "0x20", 00:09:55.664 
"tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "ftl": { 00:09:55.664 "mask": "0x40", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "blobfs": { 00:09:55.664 "mask": "0x80", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "dsa": { 00:09:55.664 "mask": "0x200", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "thread": { 00:09:55.664 "mask": "0x400", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "nvme_pcie": { 00:09:55.664 "mask": "0x800", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "iaa": { 00:09:55.664 "mask": "0x1000", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "nvme_tcp": { 00:09:55.664 "mask": "0x2000", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "bdev_nvme": { 00:09:55.664 "mask": "0x4000", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "sock": { 00:09:55.664 "mask": "0x8000", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "blob": { 00:09:55.664 "mask": "0x10000", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "bdev_raid": { 00:09:55.664 "mask": "0x20000", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 }, 00:09:55.664 "scheduler": { 00:09:55.664 "mask": "0x40000", 00:09:55.664 "tpoint_mask": "0x0" 00:09:55.664 } 00:09:55.664 }' 00:09:55.664 23:51:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:55.664 23:51:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:55.664 23:51:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:55.664 23:51:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:55.664 23:51:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:55.664 23:51:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:55.664 23:51:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:55.664 23:51:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:55.664 23:51:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:55.925 23:51:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:55.925 00:09:55.925 real 0m0.246s 00:09:55.925 user 0m0.200s 00:09:55.925 sys 0m0.039s 00:09:55.925 23:51:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.925 23:51:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.925 ************************************ 00:09:55.925 END TEST rpc_trace_cmd_test 00:09:55.925 ************************************ 00:09:55.925 23:51:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:55.925 23:51:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:55.925 23:51:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:55.925 23:51:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.925 23:51:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.925 23:51:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.925 ************************************ 00:09:55.925 START TEST rpc_daemon_integrity 00:09:55.925 ************************************ 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.925 23:51:40 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.925 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:55.925 { 00:09:55.925 "name": "Malloc2", 00:09:55.925 "aliases": [ 00:09:55.925 "6286130d-6219-4add-b948-8156749a7395" 00:09:55.925 ], 00:09:55.925 "product_name": "Malloc disk", 00:09:55.925 "block_size": 512, 00:09:55.925 "num_blocks": 16384, 00:09:55.925 "uuid": "6286130d-6219-4add-b948-8156749a7395", 00:09:55.925 "assigned_rate_limits": { 00:09:55.925 "rw_ios_per_sec": 0, 00:09:55.925 "rw_mbytes_per_sec": 0, 00:09:55.925 "r_mbytes_per_sec": 0, 00:09:55.925 "w_mbytes_per_sec": 0 00:09:55.925 }, 00:09:55.925 "claimed": false, 00:09:55.925 "zoned": false, 00:09:55.925 "supported_io_types": { 00:09:55.925 "read": true, 00:09:55.926 "write": true, 00:09:55.926 "unmap": true, 00:09:55.926 "flush": true, 00:09:55.926 "reset": true, 00:09:55.926 "nvme_admin": false, 00:09:55.926 "nvme_io": false, 00:09:55.926 "nvme_io_md": false, 00:09:55.926 "write_zeroes": true, 00:09:55.926 "zcopy": true, 00:09:55.926 "get_zone_info": false, 00:09:55.926 "zone_management": false, 00:09:55.926 "zone_append": false, 00:09:55.926 "compare": false, 00:09:55.926 "compare_and_write": false, 00:09:55.926 "abort": true, 00:09:55.926 "seek_hole": false, 00:09:55.926 "seek_data": false, 00:09:55.926 "copy": true, 00:09:55.926 "nvme_iov_md": false 00:09:55.926 }, 00:09:55.926 "memory_domains": [ 00:09:55.926 { 00:09:55.926 "dma_device_id": "system", 00:09:55.926 "dma_device_type": 1 00:09:55.926 }, 00:09:55.926 { 00:09:55.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.926 "dma_device_type": 2 00:09:55.926 } 00:09:55.926 ], 00:09:55.926 "driver_specific": {} 00:09:55.926 } 00:09:55.926 ]' 00:09:55.926 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:55.926 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:55.926 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:56.186 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.186 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:56.186 [2024-12-09 23:51:40.402318] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:56.186 
[2024-12-09 23:51:40.402351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.186 [2024-12-09 23:51:40.402364] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21aa760 00:09:56.186 [2024-12-09 23:51:40.402372] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.186 [2024-12-09 23:51:40.403340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.186 [2024-12-09 23:51:40.403363] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:56.186 Passthru0 00:09:56.186 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.186 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:56.186 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.186 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:56.186 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.186 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:56.186 { 00:09:56.186 "name": "Malloc2", 00:09:56.186 "aliases": [ 00:09:56.186 "6286130d-6219-4add-b948-8156749a7395" 00:09:56.186 ], 00:09:56.186 "product_name": "Malloc disk", 00:09:56.186 "block_size": 512, 00:09:56.186 "num_blocks": 16384, 00:09:56.186 "uuid": "6286130d-6219-4add-b948-8156749a7395", 00:09:56.186 "assigned_rate_limits": { 00:09:56.186 "rw_ios_per_sec": 0, 00:09:56.186 "rw_mbytes_per_sec": 0, 00:09:56.186 "r_mbytes_per_sec": 0, 00:09:56.186 "w_mbytes_per_sec": 0 00:09:56.186 }, 00:09:56.186 "claimed": true, 00:09:56.186 "claim_type": "exclusive_write", 00:09:56.186 "zoned": false, 00:09:56.186 "supported_io_types": { 00:09:56.186 "read": true, 00:09:56.186 "write": true, 00:09:56.186 "unmap": true, 00:09:56.186 "flush": true, 00:09:56.186 "reset": true, 00:09:56.186 "nvme_admin": false, 00:09:56.186 "nvme_io": false, 00:09:56.186 "nvme_io_md": false, 00:09:56.186 "write_zeroes": true, 00:09:56.186 "zcopy": true, 00:09:56.186 "get_zone_info": false, 00:09:56.186 "zone_management": false, 00:09:56.186 "zone_append": false, 00:09:56.186 "compare": false, 00:09:56.186 "compare_and_write": false, 00:09:56.186 "abort": true, 00:09:56.186 "seek_hole": false, 00:09:56.186 "seek_data": false, 00:09:56.186 "copy": true, 00:09:56.186 "nvme_iov_md": false 00:09:56.186 }, 00:09:56.186 "memory_domains": [ 00:09:56.186 { 00:09:56.186 "dma_device_id": "system", 00:09:56.186 "dma_device_type": 1 00:09:56.186 }, 00:09:56.186 { 00:09:56.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.186 "dma_device_type": 2 00:09:56.186 } 00:09:56.186 ], 00:09:56.186 "driver_specific": {} 00:09:56.186 }, 00:09:56.186 { 00:09:56.186 "name": "Passthru0", 00:09:56.186 "aliases": [ 00:09:56.186 "2de4358a-0bdb-5b5b-9329-c582b333b0bf" 00:09:56.186 ], 00:09:56.186 "product_name": "passthru", 00:09:56.186 "block_size": 512, 00:09:56.186 "num_blocks": 16384, 00:09:56.186 "uuid": "2de4358a-0bdb-5b5b-9329-c582b333b0bf", 00:09:56.187 "assigned_rate_limits": { 00:09:56.187 "rw_ios_per_sec": 0, 00:09:56.187 "rw_mbytes_per_sec": 0, 00:09:56.187 "r_mbytes_per_sec": 0, 00:09:56.187 "w_mbytes_per_sec": 0 00:09:56.187 }, 00:09:56.187 "claimed": false, 00:09:56.187 "zoned": false, 00:09:56.187 "supported_io_types": { 00:09:56.187 "read": true, 00:09:56.187 "write": true, 00:09:56.187 "unmap": true, 00:09:56.187 "flush": true, 00:09:56.187 "reset": true, 
00:09:56.187 "nvme_admin": false, 00:09:56.187 "nvme_io": false, 00:09:56.187 "nvme_io_md": false, 00:09:56.187 "write_zeroes": true, 00:09:56.187 "zcopy": true, 00:09:56.187 "get_zone_info": false, 00:09:56.187 "zone_management": false, 00:09:56.187 "zone_append": false, 00:09:56.187 "compare": false, 00:09:56.187 "compare_and_write": false, 00:09:56.187 "abort": true, 00:09:56.187 "seek_hole": false, 00:09:56.187 "seek_data": false, 00:09:56.187 "copy": true, 00:09:56.187 "nvme_iov_md": false 00:09:56.187 }, 00:09:56.187 "memory_domains": [ 00:09:56.187 { 00:09:56.187 "dma_device_id": "system", 00:09:56.187 "dma_device_type": 1 00:09:56.187 }, 00:09:56.187 { 00:09:56.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.187 "dma_device_type": 2 00:09:56.187 } 00:09:56.187 ], 00:09:56.187 "driver_specific": { 00:09:56.187 "passthru": { 00:09:56.187 "name": "Passthru0", 00:09:56.187 "base_bdev_name": "Malloc2" 00:09:56.187 } 00:09:56.187 } 00:09:56.187 } 00:09:56.187 ]' 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:56.187 00:09:56.187 real 0m0.301s 00:09:56.187 user 0m0.192s 00:09:56.187 sys 0m0.051s 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.187 23:51:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:56.187 ************************************ 00:09:56.187 END TEST rpc_daemon_integrity 00:09:56.187 ************************************ 00:09:56.187 23:51:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:56.187 23:51:40 rpc -- rpc/rpc.sh@84 -- # killprocess 212667 00:09:56.187 23:51:40 rpc -- common/autotest_common.sh@954 -- # '[' -z 212667 ']' 00:09:56.187 23:51:40 rpc -- common/autotest_common.sh@958 -- # kill -0 212667 00:09:56.187 23:51:40 rpc -- common/autotest_common.sh@959 -- # uname 00:09:56.187 23:51:40 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.187 23:51:40 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 212667 
00:09:56.448 23:51:40 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.448 23:51:40 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.448 23:51:40 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 212667' 00:09:56.448 killing process with pid 212667 00:09:56.448 23:51:40 rpc -- common/autotest_common.sh@973 -- # kill 212667 00:09:56.448 23:51:40 rpc -- common/autotest_common.sh@978 -- # wait 212667 00:09:56.712 00:09:56.712 real 0m2.755s 00:09:56.712 user 0m3.493s 00:09:56.712 sys 0m0.867s 00:09:56.712 23:51:40 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.712 23:51:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.712 ************************************ 00:09:56.712 END TEST rpc 00:09:56.712 ************************************ 00:09:56.712 23:51:41 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:09:56.712 23:51:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.712 23:51:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.712 23:51:41 -- common/autotest_common.sh@10 -- # set +x 00:09:56.712 ************************************ 00:09:56.712 START TEST skip_rpc 00:09:56.712 ************************************ 00:09:56.712 23:51:41 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:09:56.712 * Looking for test storage... 00:09:56.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:56.712 23:51:41 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:56.712 23:51:41 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:56.712 23:51:41 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:56.973 23:51:41 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.973 23:51:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:56.973 23:51:41 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.973 23:51:41 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:56.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.973 --rc genhtml_branch_coverage=1 00:09:56.973 --rc genhtml_function_coverage=1 00:09:56.973 --rc genhtml_legend=1 00:09:56.973 --rc geninfo_all_blocks=1 00:09:56.973 --rc geninfo_unexecuted_blocks=1 00:09:56.973 00:09:56.973 ' 00:09:56.973 23:51:41 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:56.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.973 --rc genhtml_branch_coverage=1 00:09:56.973 --rc genhtml_function_coverage=1 00:09:56.973 --rc genhtml_legend=1 00:09:56.973 --rc geninfo_all_blocks=1 00:09:56.973 --rc geninfo_unexecuted_blocks=1 00:09:56.973 00:09:56.973 ' 00:09:56.973 23:51:41 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:56.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.973 --rc genhtml_branch_coverage=1 00:09:56.973 --rc genhtml_function_coverage=1 00:09:56.973 --rc genhtml_legend=1 00:09:56.973 --rc geninfo_all_blocks=1 00:09:56.973 --rc geninfo_unexecuted_blocks=1 00:09:56.973 00:09:56.973 ' 00:09:56.973 23:51:41 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:56.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.973 --rc genhtml_branch_coverage=1 00:09:56.973 --rc genhtml_function_coverage=1 00:09:56.973 --rc genhtml_legend=1 00:09:56.973 --rc geninfo_all_blocks=1 00:09:56.973 --rc geninfo_unexecuted_blocks=1 00:09:56.973 00:09:56.973 ' 00:09:56.973 23:51:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:56.973 23:51:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:56.974 23:51:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:56.974 23:51:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.974 23:51:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.974 23:51:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.974 ************************************ 00:09:56.974 START TEST skip_rpc 00:09:56.974 ************************************ 00:09:56.974 23:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:09:56.974 
23:51:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=213376 00:09:56.974 23:51:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:56.974 23:51:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:56.974 23:51:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:56.974 [2024-12-09 23:51:41.334153] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:09:56.974 [2024-12-09 23:51:41.334195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid213376 ] 00:09:56.974 [2024-12-09 23:51:41.424991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.234 [2024-12-09 23:51:41.464733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 213376 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 213376 ']' 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 213376 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 213376 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 213376' 00:10:02.514 killing process with pid 213376 00:10:02.514 23:51:46 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 213376 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 213376 00:10:02.514 00:10:02.514 real 0m5.382s 00:10:02.514 user 0m5.121s 00:10:02.514 sys 0m0.313s 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.514 23:51:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.514 ************************************ 00:10:02.514 END TEST skip_rpc 00:10:02.514 ************************************ 00:10:02.514 23:51:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:02.514 23:51:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.514 23:51:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.514 23:51:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.514 ************************************ 00:10:02.514 START TEST skip_rpc_with_json 00:10:02.514 ************************************ 00:10:02.514 23:51:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:10:02.514 23:51:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:02.514 23:51:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=214459 00:10:02.514 23:51:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:02.514 23:51:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:02.514 23:51:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 214459 00:10:02.514 23:51:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 214459 ']' 00:10:02.514 23:51:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.514 23:51:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.514 23:51:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.514 23:51:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.514 23:51:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:02.514 [2024-12-09 23:51:46.807287] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:10:02.514 [2024-12-09 23:51:46.807333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid214459 ] 00:10:02.514 [2024-12-09 23:51:46.898533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.514 [2024-12-09 23:51:46.939901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.454 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.454 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:10:03.454 23:51:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:03.454 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.454 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:03.454 [2024-12-09 23:51:47.630756] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:03.454 request: 00:10:03.454 { 00:10:03.454 "trtype": "tcp", 00:10:03.454 "method": "nvmf_get_transports", 00:10:03.454 "req_id": 1 00:10:03.454 } 00:10:03.454 Got JSON-RPC error response 00:10:03.454 response: 00:10:03.454 { 00:10:03.454 "code": -19, 00:10:03.454 "message": "No such device" 00:10:03.454 } 00:10:03.454 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:03.454 23:51:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:03.454 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.454 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:03.454 [2024-12-09 23:51:47.642886] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.454 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.454 23:51:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:03.454 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.454 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:03.455 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.455 23:51:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:10:03.455 { 00:10:03.455 "subsystems": [ 00:10:03.455 { 00:10:03.455 "subsystem": "fsdev", 00:10:03.455 "config": [ 00:10:03.455 { 00:10:03.455 "method": "fsdev_set_opts", 00:10:03.455 "params": { 00:10:03.455 "fsdev_io_pool_size": 65535, 00:10:03.455 "fsdev_io_cache_size": 256 00:10:03.455 } 00:10:03.455 } 00:10:03.455 ] 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "vfio_user_target", 00:10:03.455 "config": null 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "keyring", 00:10:03.455 "config": [] 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "iobuf", 00:10:03.455 "config": [ 00:10:03.455 { 00:10:03.455 "method": "iobuf_set_options", 00:10:03.455 "params": { 00:10:03.455 "small_pool_count": 8192, 00:10:03.455 "large_pool_count": 1024, 00:10:03.455 "small_bufsize": 8192, 00:10:03.455 "large_bufsize": 135168, 00:10:03.455 "enable_numa": false 00:10:03.455 } 00:10:03.455 } 00:10:03.455 
] 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "sock", 00:10:03.455 "config": [ 00:10:03.455 { 00:10:03.455 "method": "sock_set_default_impl", 00:10:03.455 "params": { 00:10:03.455 "impl_name": "posix" 00:10:03.455 } 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "method": "sock_impl_set_options", 00:10:03.455 "params": { 00:10:03.455 "impl_name": "ssl", 00:10:03.455 "recv_buf_size": 4096, 00:10:03.455 "send_buf_size": 4096, 00:10:03.455 "enable_recv_pipe": true, 00:10:03.455 "enable_quickack": false, 00:10:03.455 "enable_placement_id": 0, 00:10:03.455 "enable_zerocopy_send_server": true, 00:10:03.455 "enable_zerocopy_send_client": false, 00:10:03.455 "zerocopy_threshold": 0, 00:10:03.455 "tls_version": 0, 00:10:03.455 "enable_ktls": false 00:10:03.455 } 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "method": "sock_impl_set_options", 00:10:03.455 "params": { 00:10:03.455 "impl_name": "posix", 00:10:03.455 "recv_buf_size": 2097152, 00:10:03.455 "send_buf_size": 2097152, 00:10:03.455 "enable_recv_pipe": true, 00:10:03.455 "enable_quickack": false, 00:10:03.455 "enable_placement_id": 0, 00:10:03.455 "enable_zerocopy_send_server": true, 00:10:03.455 "enable_zerocopy_send_client": false, 00:10:03.455 "zerocopy_threshold": 0, 00:10:03.455 "tls_version": 0, 00:10:03.455 "enable_ktls": false 00:10:03.455 } 00:10:03.455 } 00:10:03.455 ] 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "vmd", 00:10:03.455 "config": [] 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "accel", 00:10:03.455 "config": [ 00:10:03.455 { 00:10:03.455 "method": "accel_set_options", 00:10:03.455 "params": { 00:10:03.455 "small_cache_size": 128, 00:10:03.455 "large_cache_size": 16, 00:10:03.455 "task_count": 2048, 00:10:03.455 "sequence_count": 2048, 00:10:03.455 "buf_count": 2048 00:10:03.455 } 00:10:03.455 } 00:10:03.455 ] 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "bdev", 00:10:03.455 "config": [ 00:10:03.455 { 00:10:03.455 "method": "bdev_set_options", 00:10:03.455 "params": { 00:10:03.455 "bdev_io_pool_size": 65535, 00:10:03.455 "bdev_io_cache_size": 256, 00:10:03.455 "bdev_auto_examine": true, 00:10:03.455 "iobuf_small_cache_size": 128, 00:10:03.455 "iobuf_large_cache_size": 16 00:10:03.455 } 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "method": "bdev_raid_set_options", 00:10:03.455 "params": { 00:10:03.455 "process_window_size_kb": 1024, 00:10:03.455 "process_max_bandwidth_mb_sec": 0 00:10:03.455 } 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "method": "bdev_iscsi_set_options", 00:10:03.455 "params": { 00:10:03.455 "timeout_sec": 30 00:10:03.455 } 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "method": "bdev_nvme_set_options", 00:10:03.455 "params": { 00:10:03.455 "action_on_timeout": "none", 00:10:03.455 "timeout_us": 0, 00:10:03.455 "timeout_admin_us": 0, 00:10:03.455 "keep_alive_timeout_ms": 10000, 00:10:03.455 "arbitration_burst": 0, 00:10:03.455 "low_priority_weight": 0, 00:10:03.455 "medium_priority_weight": 0, 00:10:03.455 "high_priority_weight": 0, 00:10:03.455 "nvme_adminq_poll_period_us": 10000, 00:10:03.455 "nvme_ioq_poll_period_us": 0, 00:10:03.455 "io_queue_requests": 0, 00:10:03.455 "delay_cmd_submit": true, 00:10:03.455 "transport_retry_count": 4, 00:10:03.455 "bdev_retry_count": 3, 00:10:03.455 "transport_ack_timeout": 0, 00:10:03.455 "ctrlr_loss_timeout_sec": 0, 00:10:03.455 "reconnect_delay_sec": 0, 00:10:03.455 "fast_io_fail_timeout_sec": 0, 00:10:03.455 "disable_auto_failback": false, 00:10:03.455 "generate_uuids": false, 00:10:03.455 "transport_tos": 0, 
00:10:03.455 "nvme_error_stat": false, 00:10:03.455 "rdma_srq_size": 0, 00:10:03.455 "io_path_stat": false, 00:10:03.455 "allow_accel_sequence": false, 00:10:03.455 "rdma_max_cq_size": 0, 00:10:03.455 "rdma_cm_event_timeout_ms": 0, 00:10:03.455 "dhchap_digests": [ 00:10:03.455 "sha256", 00:10:03.455 "sha384", 00:10:03.455 "sha512" 00:10:03.455 ], 00:10:03.455 "dhchap_dhgroups": [ 00:10:03.455 "null", 00:10:03.455 "ffdhe2048", 00:10:03.455 "ffdhe3072", 00:10:03.455 "ffdhe4096", 00:10:03.455 "ffdhe6144", 00:10:03.455 "ffdhe8192" 00:10:03.455 ] 00:10:03.455 } 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "method": "bdev_nvme_set_hotplug", 00:10:03.455 "params": { 00:10:03.455 "period_us": 100000, 00:10:03.455 "enable": false 00:10:03.455 } 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "method": "bdev_wait_for_examine" 00:10:03.455 } 00:10:03.455 ] 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "scsi", 00:10:03.455 "config": null 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "scheduler", 00:10:03.455 "config": [ 00:10:03.455 { 00:10:03.455 "method": "framework_set_scheduler", 00:10:03.455 "params": { 00:10:03.455 "name": "static" 00:10:03.455 } 00:10:03.455 } 00:10:03.455 ] 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "vhost_scsi", 00:10:03.455 "config": [] 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "vhost_blk", 00:10:03.455 "config": [] 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "ublk", 00:10:03.455 "config": [] 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "nbd", 00:10:03.455 "config": [] 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "nvmf", 00:10:03.455 "config": [ 00:10:03.455 { 00:10:03.455 "method": "nvmf_set_config", 00:10:03.455 "params": { 00:10:03.455 "discovery_filter": "match_any", 00:10:03.455 "admin_cmd_passthru": { 00:10:03.455 "identify_ctrlr": false 00:10:03.455 }, 00:10:03.455 "dhchap_digests": [ 00:10:03.455 "sha256", 00:10:03.455 "sha384", 00:10:03.455 "sha512" 00:10:03.455 ], 00:10:03.455 "dhchap_dhgroups": [ 00:10:03.455 "null", 00:10:03.455 "ffdhe2048", 00:10:03.455 "ffdhe3072", 00:10:03.455 "ffdhe4096", 00:10:03.455 "ffdhe6144", 00:10:03.455 "ffdhe8192" 00:10:03.455 ] 00:10:03.455 } 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "method": "nvmf_set_max_subsystems", 00:10:03.455 "params": { 00:10:03.455 "max_subsystems": 1024 00:10:03.455 } 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "method": "nvmf_set_crdt", 00:10:03.455 "params": { 00:10:03.455 "crdt1": 0, 00:10:03.455 "crdt2": 0, 00:10:03.455 "crdt3": 0 00:10:03.455 } 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "method": "nvmf_create_transport", 00:10:03.455 "params": { 00:10:03.455 "trtype": "TCP", 00:10:03.455 "max_queue_depth": 128, 00:10:03.455 "max_io_qpairs_per_ctrlr": 127, 00:10:03.455 "in_capsule_data_size": 4096, 00:10:03.455 "max_io_size": 131072, 00:10:03.455 "io_unit_size": 131072, 00:10:03.455 "max_aq_depth": 128, 00:10:03.455 "num_shared_buffers": 511, 00:10:03.455 "buf_cache_size": 4294967295, 00:10:03.455 "dif_insert_or_strip": false, 00:10:03.455 "zcopy": false, 00:10:03.455 "c2h_success": true, 00:10:03.455 "sock_priority": 0, 00:10:03.455 "abort_timeout_sec": 1, 00:10:03.455 "ack_timeout": 0, 00:10:03.455 "data_wr_pool_size": 0 00:10:03.455 } 00:10:03.455 } 00:10:03.455 ] 00:10:03.455 }, 00:10:03.455 { 00:10:03.455 "subsystem": "iscsi", 00:10:03.455 "config": [ 00:10:03.455 { 00:10:03.455 "method": "iscsi_set_options", 00:10:03.455 "params": { 00:10:03.455 "node_base": "iqn.2016-06.io.spdk", 00:10:03.455 "max_sessions": 
128, 00:10:03.455 "max_connections_per_session": 2, 00:10:03.455 "max_queue_depth": 64, 00:10:03.455 "default_time2wait": 2, 00:10:03.455 "default_time2retain": 20, 00:10:03.455 "first_burst_length": 8192, 00:10:03.455 "immediate_data": true, 00:10:03.455 "allow_duplicated_isid": false, 00:10:03.455 "error_recovery_level": 0, 00:10:03.455 "nop_timeout": 60, 00:10:03.455 "nop_in_interval": 30, 00:10:03.455 "disable_chap": false, 00:10:03.455 "require_chap": false, 00:10:03.456 "mutual_chap": false, 00:10:03.456 "chap_group": 0, 00:10:03.456 "max_large_datain_per_connection": 64, 00:10:03.456 "max_r2t_per_connection": 4, 00:10:03.456 "pdu_pool_size": 36864, 00:10:03.456 "immediate_data_pool_size": 16384, 00:10:03.456 "data_out_pool_size": 2048 00:10:03.456 } 00:10:03.456 } 00:10:03.456 ] 00:10:03.456 } 00:10:03.456 ] 00:10:03.456 } 00:10:03.456 23:51:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:03.456 23:51:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 214459 00:10:03.456 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 214459 ']' 00:10:03.456 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 214459 00:10:03.456 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:03.456 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.456 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 214459 00:10:03.456 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.456 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.456 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 214459' 00:10:03.456 killing process with pid 214459 00:10:03.456 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 214459 00:10:03.456 23:51:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 214459 00:10:04.027 23:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=214727 00:10:04.027 23:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:10:04.027 23:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 214727 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 214727 ']' 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 214727 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 214727 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
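The skip_rpc_with_json pass shown above is a configuration round trip: a first target creates the TCP transport over RPC, save_config dumps the JSON printed above into test/rpc/config.json, and a second target is then launched with --no-rpc-server --json so the same configuration is replayed without any RPC traffic. A minimal sketch of that round trip, using the commands from the trace with paths shortened to the repo root:

  build/bin/spdk_tgt -m 0x1 &
  spdk_pid=$!
  sleep 5
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json
  kill $spdk_pid
  # replay the saved config with the RPC server disabled, capturing the log
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  sleep 5

The pass/fail check that follows in the trace is simply a grep -q 'TCP Transport Init' against that captured log file.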
'killing process with pid 214727' 00:10:09.312 killing process with pid 214727 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 214727 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 214727 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:10:09.312 00:10:09.312 real 0m6.826s 00:10:09.312 user 0m6.626s 00:10:09.312 sys 0m0.700s 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:09.312 ************************************ 00:10:09.312 END TEST skip_rpc_with_json 00:10:09.312 ************************************ 00:10:09.312 23:51:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:09.312 23:51:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:09.312 23:51:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.312 23:51:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.312 ************************************ 00:10:09.312 START TEST skip_rpc_with_delay 00:10:09.312 ************************************ 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:09.312 [2024-12-09 
23:51:53.723428] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:09.312 00:10:09.312 real 0m0.075s 00:10:09.312 user 0m0.046s 00:10:09.312 sys 0m0.029s 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.312 23:51:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:09.312 ************************************ 00:10:09.312 END TEST skip_rpc_with_delay 00:10:09.312 ************************************ 00:10:09.312 23:51:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:09.312 23:51:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:09.312 23:51:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:09.573 23:51:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:09.573 23:51:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.573 23:51:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.573 ************************************ 00:10:09.573 START TEST exit_on_failed_rpc_init 00:10:09.573 ************************************ 00:10:09.573 23:51:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:10:09.573 23:51:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=215590 00:10:09.573 23:51:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 215590 00:10:09.573 23:51:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:09.573 23:51:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 215590 ']' 00:10:09.573 23:51:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.573 23:51:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.573 23:51:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.573 23:51:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.573 23:51:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:09.573 [2024-12-09 23:51:53.885664] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
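The skip_rpc_with_delay case above only has to provoke the error just captured: --wait-for-rpc is meaningless when the RPC server is disabled, so the target refuses to start. Reproduced in isolation (path shortened, an assumption for readability):

  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # expected on stderr: Cannot use '--wait-for-rpc' if no RPC server is going to be started.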
00:10:09.573 [2024-12-09 23:51:53.885713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid215590 ] 00:10:09.573 [2024-12-09 23:51:53.977071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.573 [2024-12-09 23:51:54.015756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:10.514 [2024-12-09 23:51:54.777317] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:10:10.514 [2024-12-09 23:51:54.777365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid215850 ] 00:10:10.514 [2024-12-09 23:51:54.864366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.514 [2024-12-09 23:51:54.903858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.514 [2024-12-09 23:51:54.903918] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:10:10.514 [2024-12-09 23:51:54.903930] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:10.514 [2024-12-09 23:51:54.903938] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:10:10.514 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:10:10.515 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:10:10.515 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:10.515 23:51:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:10.515 23:51:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 215590 00:10:10.515 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 215590 ']' 00:10:10.515 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 215590 00:10:10.515 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:10:10.515 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.515 23:51:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 215590 00:10:10.782 23:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.782 23:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.782 23:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 215590' 00:10:10.782 killing process with pid 215590 00:10:10.782 23:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 215590 00:10:10.782 23:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 215590 00:10:11.041 00:10:11.042 real 0m1.491s 00:10:11.042 user 0m1.646s 00:10:11.042 sys 0m0.497s 00:10:11.042 23:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.042 23:51:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:11.042 ************************************ 00:10:11.042 END TEST exit_on_failed_rpc_init 00:10:11.042 ************************************ 00:10:11.042 23:51:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:10:11.042 00:10:11.042 real 0m14.318s 00:10:11.042 user 0m13.670s 00:10:11.042 sys 0m1.896s 00:10:11.042 23:51:55 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.042 23:51:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.042 ************************************ 00:10:11.042 END TEST skip_rpc 00:10:11.042 ************************************ 00:10:11.042 23:51:55 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:10:11.042 23:51:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:11.042 23:51:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.042 23:51:55 -- 
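exit_on_failed_rpc_init, which finished above, deliberately starts a second target while the first still owns the default RPC socket; the second instance fails with 'RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.' and exits non-zero, which is what the test asserts before killing the first instance. A rough equivalent of the two launches (paths shortened, and a plain sleep standing in for the waitforlisten helper used by the test):

  build/bin/spdk_tgt -m 0x1 &            # first instance takes /var/tmp/spdk.sock
  first_pid=$!
  sleep 5
  build/bin/spdk_tgt -m 0x2              # second instance cannot listen, app stops non-zero
  echo "second target exit code: $?"
  kill $first_pid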
common/autotest_common.sh@10 -- # set +x 00:10:11.042 ************************************ 00:10:11.042 START TEST rpc_client 00:10:11.042 ************************************ 00:10:11.042 23:51:55 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:10:11.302 * Looking for test storage... 00:10:11.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:10:11.302 23:51:55 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:11.302 23:51:55 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:10:11.302 23:51:55 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.302 23:51:55 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@345 -- # : 1 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.302 23:51:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:11.302 23:51:55 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.302 23:51:55 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.302 --rc genhtml_branch_coverage=1 00:10:11.302 --rc genhtml_function_coverage=1 00:10:11.302 --rc genhtml_legend=1 00:10:11.302 --rc geninfo_all_blocks=1 00:10:11.302 --rc geninfo_unexecuted_blocks=1 00:10:11.302 00:10:11.302 ' 00:10:11.302 23:51:55 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.302 --rc genhtml_branch_coverage=1 00:10:11.302 --rc genhtml_function_coverage=1 00:10:11.302 --rc genhtml_legend=1 00:10:11.302 --rc geninfo_all_blocks=1 00:10:11.302 --rc geninfo_unexecuted_blocks=1 00:10:11.302 00:10:11.302 ' 00:10:11.302 23:51:55 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.302 --rc genhtml_branch_coverage=1 00:10:11.302 --rc genhtml_function_coverage=1 00:10:11.302 --rc genhtml_legend=1 00:10:11.302 --rc geninfo_all_blocks=1 00:10:11.302 --rc geninfo_unexecuted_blocks=1 00:10:11.302 00:10:11.302 ' 00:10:11.302 23:51:55 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.302 --rc genhtml_branch_coverage=1 00:10:11.302 --rc genhtml_function_coverage=1 00:10:11.302 --rc genhtml_legend=1 00:10:11.302 --rc geninfo_all_blocks=1 00:10:11.302 --rc geninfo_unexecuted_blocks=1 00:10:11.302 00:10:11.302 ' 00:10:11.302 23:51:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:10:11.302 OK 00:10:11.302 23:51:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:11.302 00:10:11.302 real 0m0.224s 00:10:11.302 user 0m0.128s 00:10:11.302 sys 0m0.113s 00:10:11.303 23:51:55 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.303 23:51:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:11.303 ************************************ 00:10:11.303 END TEST rpc_client 00:10:11.303 ************************************ 00:10:11.303 23:51:55 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:10:11.303 23:51:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:11.303 23:51:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.303 23:51:55 -- common/autotest_common.sh@10 -- # set +x 00:10:11.303 ************************************ 00:10:11.303 START TEST json_config 00:10:11.303 ************************************ 00:10:11.303 23:51:55 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:10:11.563 23:51:55 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:11.563 23:51:55 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:10:11.563 23:51:55 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.563 23:51:55 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.563 23:51:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.563 23:51:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.563 23:51:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.563 23:51:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.563 23:51:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.563 23:51:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.563 23:51:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.563 23:51:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.563 23:51:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.563 23:51:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.563 23:51:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.563 23:51:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:11.563 23:51:55 json_config -- scripts/common.sh@345 -- # : 1 00:10:11.563 23:51:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.563 23:51:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.563 23:51:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:11.563 23:51:55 json_config -- scripts/common.sh@353 -- # local d=1 00:10:11.563 23:51:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.563 23:51:55 json_config -- scripts/common.sh@355 -- # echo 1 00:10:11.563 23:51:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.563 23:51:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:11.563 23:51:55 json_config -- scripts/common.sh@353 -- # local d=2 00:10:11.563 23:51:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.563 23:51:55 json_config -- scripts/common.sh@355 -- # echo 2 00:10:11.563 23:51:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.563 23:51:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.563 23:51:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.563 23:51:55 json_config -- scripts/common.sh@368 -- # return 0 00:10:11.563 23:51:55 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.563 23:51:55 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.563 --rc genhtml_branch_coverage=1 00:10:11.563 --rc genhtml_function_coverage=1 00:10:11.563 --rc genhtml_legend=1 00:10:11.563 --rc geninfo_all_blocks=1 00:10:11.563 --rc geninfo_unexecuted_blocks=1 00:10:11.563 00:10:11.563 ' 00:10:11.563 23:51:55 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.563 --rc genhtml_branch_coverage=1 00:10:11.563 --rc genhtml_function_coverage=1 00:10:11.563 --rc genhtml_legend=1 00:10:11.563 --rc geninfo_all_blocks=1 00:10:11.563 --rc geninfo_unexecuted_blocks=1 00:10:11.563 00:10:11.563 ' 00:10:11.563 23:51:55 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.563 --rc genhtml_branch_coverage=1 00:10:11.563 --rc genhtml_function_coverage=1 00:10:11.563 --rc genhtml_legend=1 00:10:11.563 --rc geninfo_all_blocks=1 00:10:11.563 --rc geninfo_unexecuted_blocks=1 00:10:11.563 00:10:11.563 ' 00:10:11.563 23:51:55 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.563 --rc genhtml_branch_coverage=1 00:10:11.563 --rc genhtml_function_coverage=1 00:10:11.563 --rc genhtml_legend=1 00:10:11.563 --rc geninfo_all_blocks=1 00:10:11.563 --rc geninfo_unexecuted_blocks=1 00:10:11.563 00:10:11.563 ' 00:10:11.563 23:51:55 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:10:11.563 23:51:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.563 23:51:55 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.563 23:51:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.563 23:51:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.564 23:51:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.564 23:51:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.564 23:51:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.564 23:51:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.564 23:51:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.564 23:51:55 json_config -- paths/export.sh@5 -- # export PATH 00:10:11.564 23:51:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.564 23:51:55 json_config -- nvmf/common.sh@51 -- # : 0 00:10:11.564 23:51:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.564 23:51:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:10:11.564 23:51:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.564 23:51:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.564 23:51:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.564 23:51:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.564 23:51:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.564 23:51:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.564 23:51:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:10:11.564 INFO: JSON configuration test init 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:10:11.564 23:51:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.564 23:51:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:10:11.564 23:51:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.564 23:51:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:11.564 23:51:55 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:10:11.564 23:51:55 json_config -- 
json_config/common.sh@9 -- # local app=target 00:10:11.564 23:51:55 json_config -- json_config/common.sh@10 -- # shift 00:10:11.564 23:51:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:11.564 23:51:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:11.564 23:51:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:11.564 23:51:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:11.564 23:51:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:11.564 23:51:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=216234 00:10:11.564 23:51:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:11.564 Waiting for target to run... 00:10:11.564 23:51:55 json_config -- json_config/common.sh@25 -- # waitforlisten 216234 /var/tmp/spdk_tgt.sock 00:10:11.564 23:51:55 json_config -- common/autotest_common.sh@835 -- # '[' -z 216234 ']' 00:10:11.564 23:51:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:10:11.564 23:51:55 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:11.564 23:51:55 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.564 23:51:55 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:11.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:11.564 23:51:55 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.564 23:51:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:11.564 [2024-12-09 23:51:56.027223] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:10:11.564 [2024-12-09 23:51:56.027275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid216234 ] 00:10:12.149 [2024-12-09 23:51:56.324833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.149 [2024-12-09 23:51:56.357174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.409 23:51:56 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.409 23:51:56 json_config -- common/autotest_common.sh@868 -- # return 0 00:10:12.409 23:51:56 json_config -- json_config/common.sh@26 -- # echo '' 00:10:12.409 00:10:12.409 23:51:56 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:10:12.409 23:51:56 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:10:12.409 23:51:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.409 23:51:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:12.409 23:51:56 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:10:12.409 23:51:56 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:10:12.409 23:51:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.409 23:51:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:12.670 23:51:56 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:10:12.670 23:51:56 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:10:12.670 23:51:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:10:15.969 23:52:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.969 23:52:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:10:15.969 23:52:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@51 -- # local get_types 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:10:15.969 23:52:00 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@54 -- # sort 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:10:15.969 23:52:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.969 23:52:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@62 -- # return 0 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:10:15.969 23:52:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.969 23:52:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:10:15.969 23:52:00 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:10:15.969 23:52:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:10:16.230 MallocForNvmf0 00:10:16.230 23:52:00 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:10:16.230 23:52:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:10:16.230 MallocForNvmf1 00:10:16.230 23:52:00 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:10:16.230 23:52:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:10:16.490 [2024-12-09 23:52:00.831952] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.490 23:52:00 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.490 23:52:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.750 23:52:01 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:10:16.750 23:52:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:10:16.750 23:52:01 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:10:16.750 23:52:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:10:17.009 23:52:01 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:10:17.009 23:52:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:10:17.269 [2024-12-09 23:52:01.534142] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:17.269 23:52:01 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:10:17.269 23:52:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.269 23:52:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:17.269 23:52:01 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:10:17.269 23:52:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.269 23:52:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:17.269 23:52:01 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:10:17.269 23:52:01 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:17.269 23:52:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:17.529 MallocBdevForConfigChangeCheck 00:10:17.529 23:52:01 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:10:17.529 23:52:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.529 23:52:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:17.529 23:52:01 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:10:17.529 23:52:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:17.790 23:52:02 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:10:17.790 INFO: shutting down applications... 
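The subsystem setup traced above reduces to the following RPC sequence against the target's UNIX socket; this is a consolidated sketch of the calls already shown in the trace, not an extra test step:

# Sketch of the nvmf subsystem configuration exercised by json_config.sh,
# using the rpc.py path and socket from this run.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0       # 8 MB malloc bdev, 512-byte blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1      # 4 MB malloc bdev, 1024-byte blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0            # TCP transport init
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420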
00:10:17.790 23:52:02 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:10:17.790 23:52:02 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:10:17.790 23:52:02 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:10:17.790 23:52:02 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:20.330 Calling clear_iscsi_subsystem 00:10:20.331 Calling clear_nvmf_subsystem 00:10:20.331 Calling clear_nbd_subsystem 00:10:20.331 Calling clear_ublk_subsystem 00:10:20.331 Calling clear_vhost_blk_subsystem 00:10:20.331 Calling clear_vhost_scsi_subsystem 00:10:20.331 Calling clear_bdev_subsystem 00:10:20.331 23:52:04 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:10:20.331 23:52:04 json_config -- json_config/json_config.sh@350 -- # count=100 00:10:20.331 23:52:04 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:10:20.331 23:52:04 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:20.331 23:52:04 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:20.331 23:52:04 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:10:20.331 23:52:04 json_config -- json_config/json_config.sh@352 -- # break 00:10:20.331 23:52:04 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:10:20.331 23:52:04 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:10:20.331 23:52:04 json_config -- json_config/common.sh@31 -- # local app=target 00:10:20.331 23:52:04 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:20.331 23:52:04 json_config -- json_config/common.sh@35 -- # [[ -n 216234 ]] 00:10:20.331 23:52:04 json_config -- json_config/common.sh@38 -- # kill -SIGINT 216234 00:10:20.331 23:52:04 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:20.331 23:52:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:20.331 23:52:04 json_config -- json_config/common.sh@41 -- # kill -0 216234 00:10:20.331 23:52:04 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:20.901 23:52:05 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:20.901 23:52:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:20.901 23:52:05 json_config -- json_config/common.sh@41 -- # kill -0 216234 00:10:20.901 23:52:05 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:20.901 23:52:05 json_config -- json_config/common.sh@43 -- # break 00:10:20.901 23:52:05 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:20.901 23:52:05 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:20.901 SPDK target shutdown done 00:10:20.901 23:52:05 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:10:20.901 INFO: relaunching applications... 
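The relaunch announced here restarts spdk_tgt from the JSON file saved a moment earlier and then waits for its RPC socket to answer before issuing further RPCs. A minimal sketch of that step, assuming the paths from this run and a simple poll for waitforlisten:

# Sketch of json_config_test_start_app target --json <saved config>.
SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
$SPDK_TGT -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json &
app_pid=$!
# waitforlisten (sketch): poll the RPC socket until the target responds,
# up to max_retries=100 as in the trace above.
for ((i = 0; i < 100; i++)); do
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done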
00:10:20.901 23:52:05 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:20.901 23:52:05 json_config -- json_config/common.sh@9 -- # local app=target 00:10:20.901 23:52:05 json_config -- json_config/common.sh@10 -- # shift 00:10:20.901 23:52:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:20.901 23:52:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:20.901 23:52:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:20.901 23:52:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:20.901 23:52:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:20.901 23:52:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=217951 00:10:20.901 23:52:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:20.901 Waiting for target to run... 00:10:20.901 23:52:05 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:20.901 23:52:05 json_config -- json_config/common.sh@25 -- # waitforlisten 217951 /var/tmp/spdk_tgt.sock 00:10:20.901 23:52:05 json_config -- common/autotest_common.sh@835 -- # '[' -z 217951 ']' 00:10:20.901 23:52:05 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:20.901 23:52:05 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.901 23:52:05 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:20.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:20.901 23:52:05 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.901 23:52:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:20.901 [2024-12-09 23:52:05.223372] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:10:20.901 [2024-12-09 23:52:05.223432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid217951 ] 00:10:21.161 [2024-12-09 23:52:05.542276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.161 [2024-12-09 23:52:05.574371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.457 [2024-12-09 23:52:08.619257] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.457 [2024-12-09 23:52:08.651620] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:24.457 23:52:08 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.457 23:52:08 json_config -- common/autotest_common.sh@868 -- # return 0 00:10:24.457 23:52:08 json_config -- json_config/common.sh@26 -- # echo '' 00:10:24.457 00:10:24.457 23:52:08 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:10:24.457 23:52:08 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:24.457 INFO: Checking if target configuration is the same... 
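The check announced here is done by json_diff.sh: it dumps the running target's configuration with save_config, normalizes both that dump and the reference file through config_filter.py -method sort, and compares the results with diff. Roughly, under those assumptions:

# Rough shape of the json_diff.sh comparison: exit 0 when the sorted configs match.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
FILTER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
$RPC save_config | $FILTER -method sort > "$tmp_file_1"
$FILTER -method sort < /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json > "$tmp_file_2"
diff -u "$tmp_file_1" "$tmp_file_2" && echo 'INFO: JSON config files are the same'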
00:10:24.457 23:52:08 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:24.457 23:52:08 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:10:24.457 23:52:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:24.457 + '[' 2 -ne 2 ']' 00:10:24.457 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:10:24.457 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:10:24.457 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:24.457 +++ basename /dev/fd/62 00:10:24.457 ++ mktemp /tmp/62.XXX 00:10:24.457 + tmp_file_1=/tmp/62.XdO 00:10:24.457 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:24.457 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:24.457 + tmp_file_2=/tmp/spdk_tgt_config.json.u7k 00:10:24.457 + ret=0 00:10:24.457 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:24.718 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:24.718 + diff -u /tmp/62.XdO /tmp/spdk_tgt_config.json.u7k 00:10:24.718 + echo 'INFO: JSON config files are the same' 00:10:24.718 INFO: JSON config files are the same 00:10:24.718 + rm /tmp/62.XdO /tmp/spdk_tgt_config.json.u7k 00:10:24.718 + exit 0 00:10:24.718 23:52:09 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:10:24.718 23:52:09 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:24.718 INFO: changing configuration and checking if this can be detected... 00:10:24.718 23:52:09 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:24.718 23:52:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:24.977 23:52:09 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:24.977 23:52:09 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:10:24.977 23:52:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:24.977 + '[' 2 -ne 2 ']' 00:10:24.977 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:10:24.977 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:10:24.977 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:24.977 +++ basename /dev/fd/62 00:10:24.977 ++ mktemp /tmp/62.XXX 00:10:24.977 + tmp_file_1=/tmp/62.tUh 00:10:24.977 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:24.977 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:24.977 + tmp_file_2=/tmp/spdk_tgt_config.json.qAd 00:10:24.977 + ret=0 00:10:24.977 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:25.237 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:25.237 + diff -u /tmp/62.tUh /tmp/spdk_tgt_config.json.qAd 00:10:25.237 + ret=1 00:10:25.237 + echo '=== Start of file: /tmp/62.tUh ===' 00:10:25.237 + cat /tmp/62.tUh 00:10:25.237 + echo '=== End of file: /tmp/62.tUh ===' 00:10:25.237 + echo '' 00:10:25.237 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qAd ===' 00:10:25.237 + cat /tmp/spdk_tgt_config.json.qAd 00:10:25.237 + echo '=== End of file: /tmp/spdk_tgt_config.json.qAd ===' 00:10:25.237 + echo '' 00:10:25.237 + rm /tmp/62.tUh /tmp/spdk_tgt_config.json.qAd 00:10:25.237 + exit 1 00:10:25.237 23:52:09 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:10:25.237 INFO: configuration change detected. 00:10:25.237 23:52:09 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:10:25.237 23:52:09 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:10:25.237 23:52:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:25.237 23:52:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:25.237 23:52:09 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:10:25.237 23:52:09 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:10:25.237 23:52:09 json_config -- json_config/json_config.sh@324 -- # [[ -n 217951 ]] 00:10:25.237 23:52:09 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:10:25.237 23:52:09 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:10:25.237 23:52:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:25.237 23:52:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:25.237 23:52:09 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:10:25.237 23:52:09 json_config -- json_config/json_config.sh@200 -- # uname -s 00:10:25.237 23:52:09 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:10:25.237 23:52:09 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:10:25.237 23:52:09 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:10:25.237 23:52:09 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:10:25.237 23:52:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:25.237 23:52:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:25.497 23:52:09 json_config -- json_config/json_config.sh@330 -- # killprocess 217951 00:10:25.497 23:52:09 json_config -- common/autotest_common.sh@954 -- # '[' -z 217951 ']' 00:10:25.497 23:52:09 json_config -- common/autotest_common.sh@958 -- # kill -0 217951 00:10:25.497 23:52:09 json_config -- common/autotest_common.sh@959 -- # uname 00:10:25.497 23:52:09 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.497 23:52:09 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217951 00:10:25.497 23:52:09 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.497 23:52:09 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.497 23:52:09 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217951' 00:10:25.497 killing process with pid 217951 00:10:25.497 23:52:09 json_config -- common/autotest_common.sh@973 -- # kill 217951 00:10:25.497 23:52:09 json_config -- common/autotest_common.sh@978 -- # wait 217951 00:10:27.406 23:52:11 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:27.406 23:52:11 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:10:27.406 23:52:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:27.406 23:52:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:27.666 23:52:11 json_config -- json_config/json_config.sh@335 -- # return 0 00:10:27.666 23:52:11 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:10:27.666 INFO: Success 00:10:27.666 00:10:27.666 real 0m16.157s 00:10:27.666 user 0m16.556s 00:10:27.666 sys 0m2.592s 00:10:27.666 23:52:11 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.666 23:52:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:27.666 ************************************ 00:10:27.666 END TEST json_config 00:10:27.666 ************************************ 00:10:27.667 23:52:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:10:27.667 23:52:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:27.667 23:52:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.667 23:52:11 -- common/autotest_common.sh@10 -- # set +x 00:10:27.667 ************************************ 00:10:27.667 START TEST json_config_extra_key 00:10:27.667 ************************************ 00:10:27.667 23:52:11 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:10:27.667 23:52:12 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:27.667 23:52:12 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:10:27.667 23:52:12 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:27.667 23:52:12 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:27.667 23:52:12 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.667 23:52:12 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.667 23:52:12 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.667 23:52:12 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.667 23:52:12 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.667 23:52:12 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.667 23:52:12 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.667 23:52:12 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.667 23:52:12 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:10:27.667 23:52:12 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.667 23:52:12 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.667 23:52:12 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:27.667 23:52:12 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:27.667 23:52:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.667 23:52:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:27.926 23:52:12 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.926 23:52:12 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:27.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.926 --rc genhtml_branch_coverage=1 00:10:27.926 --rc genhtml_function_coverage=1 00:10:27.926 --rc genhtml_legend=1 00:10:27.926 --rc geninfo_all_blocks=1 00:10:27.926 --rc geninfo_unexecuted_blocks=1 00:10:27.926 00:10:27.926 ' 00:10:27.926 23:52:12 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:27.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.926 --rc genhtml_branch_coverage=1 00:10:27.926 --rc genhtml_function_coverage=1 00:10:27.926 --rc genhtml_legend=1 00:10:27.926 --rc geninfo_all_blocks=1 00:10:27.926 --rc geninfo_unexecuted_blocks=1 00:10:27.926 00:10:27.926 ' 00:10:27.926 23:52:12 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:27.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.926 --rc genhtml_branch_coverage=1 00:10:27.926 --rc genhtml_function_coverage=1 00:10:27.926 --rc genhtml_legend=1 00:10:27.926 --rc geninfo_all_blocks=1 00:10:27.926 --rc geninfo_unexecuted_blocks=1 00:10:27.926 00:10:27.926 ' 00:10:27.926 23:52:12 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:27.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.926 --rc genhtml_branch_coverage=1 00:10:27.926 --rc genhtml_function_coverage=1 00:10:27.926 --rc genhtml_legend=1 00:10:27.926 --rc geninfo_all_blocks=1 00:10:27.926 --rc geninfo_unexecuted_blocks=1 00:10:27.926 00:10:27.926 ' 00:10:27.926 23:52:12 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.926 23:52:12 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.926 23:52:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.926 23:52:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.926 23:52:12 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.926 23:52:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:27.926 23:52:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:27.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:27.926 23:52:12 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:27.926 23:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:10:27.926 23:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:27.926 23:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:27.926 23:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:27.926 23:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:27.926 23:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:27.926 23:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:27.926 23:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:10:27.926 23:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:27.927 23:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:27.927 23:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:27.927 INFO: launching applications... 
00:10:27.927 23:52:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:10:27.927 23:52:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:27.927 23:52:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:27.927 23:52:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:27.927 23:52:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:27.927 23:52:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:27.927 23:52:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:27.927 23:52:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:27.927 23:52:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=219142 00:10:27.927 23:52:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:27.927 Waiting for target to run... 00:10:27.927 23:52:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 219142 /var/tmp/spdk_tgt.sock 00:10:27.927 23:52:12 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 219142 ']' 00:10:27.927 23:52:12 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:10:27.927 23:52:12 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:27.927 23:52:12 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.927 23:52:12 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:27.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:27.927 23:52:12 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.927 23:52:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:27.927 [2024-12-09 23:52:12.245388] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:10:27.927 [2024-12-09 23:52:12.245436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid219142 ] 00:10:28.496 [2024-12-09 23:52:12.698007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.496 [2024-12-09 23:52:12.751548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.756 23:52:13 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.756 23:52:13 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:10:28.756 23:52:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:28.756 00:10:28.756 23:52:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:28.756 INFO: shutting down applications... 
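The shutdown announced here follows the same pattern used for the json_config target earlier in the log: send SIGINT, then poll the pid for up to 30 half-second intervals before declaring the target down. A minimal sketch of that loop, with the pid from this run:

# Sketch of json_config_test_shutdown_app: SIGINT, then wait for the pid to exit.
app_pid=219142                               # pid recorded when the target was launched
kill -SIGINT "$app_pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$app_pid" 2>/dev/null || break   # process gone -> stop waiting
    sleep 0.5
done
echo 'SPDK target shutdown done'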
00:10:28.756 23:52:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:28.756 23:52:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:28.756 23:52:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:28.756 23:52:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 219142 ]] 00:10:28.756 23:52:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 219142 00:10:28.756 23:52:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:28.756 23:52:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:28.756 23:52:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 219142 00:10:28.756 23:52:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:29.327 23:52:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:29.327 23:52:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:29.327 23:52:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 219142 00:10:29.327 23:52:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:29.327 23:52:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:29.327 23:52:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:29.327 23:52:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:29.327 SPDK target shutdown done 00:10:29.327 23:52:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:29.327 Success 00:10:29.327 00:10:29.327 real 0m1.583s 00:10:29.327 user 0m1.159s 00:10:29.327 sys 0m0.595s 00:10:29.327 23:52:13 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.327 23:52:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:29.327 ************************************ 00:10:29.327 END TEST json_config_extra_key 00:10:29.327 ************************************ 00:10:29.327 23:52:13 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:29.327 23:52:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:29.327 23:52:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.327 23:52:13 -- common/autotest_common.sh@10 -- # set +x 00:10:29.327 ************************************ 00:10:29.327 START TEST alias_rpc 00:10:29.327 ************************************ 00:10:29.327 23:52:13 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:29.327 * Looking for test storage... 
00:10:29.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:10:29.327 23:52:13 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:29.327 23:52:13 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:29.327 23:52:13 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:29.587 23:52:13 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:29.587 23:52:13 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.587 23:52:13 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.587 23:52:13 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.587 23:52:13 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.587 23:52:13 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.587 23:52:13 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@345 -- # : 1 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.588 23:52:13 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:29.588 23:52:13 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.588 23:52:13 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:29.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.588 --rc genhtml_branch_coverage=1 00:10:29.588 --rc genhtml_function_coverage=1 00:10:29.588 --rc genhtml_legend=1 00:10:29.588 --rc geninfo_all_blocks=1 00:10:29.588 --rc geninfo_unexecuted_blocks=1 00:10:29.588 00:10:29.588 ' 00:10:29.588 23:52:13 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:29.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.588 --rc genhtml_branch_coverage=1 00:10:29.588 --rc genhtml_function_coverage=1 00:10:29.588 --rc genhtml_legend=1 00:10:29.588 --rc geninfo_all_blocks=1 00:10:29.588 --rc geninfo_unexecuted_blocks=1 00:10:29.588 00:10:29.588 ' 00:10:29.588 23:52:13 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:29.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.588 --rc genhtml_branch_coverage=1 00:10:29.588 --rc genhtml_function_coverage=1 00:10:29.588 --rc genhtml_legend=1 00:10:29.588 --rc geninfo_all_blocks=1 00:10:29.588 --rc geninfo_unexecuted_blocks=1 00:10:29.588 00:10:29.588 ' 00:10:29.588 23:52:13 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:29.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.588 --rc genhtml_branch_coverage=1 00:10:29.588 --rc genhtml_function_coverage=1 00:10:29.588 --rc genhtml_legend=1 00:10:29.588 --rc geninfo_all_blocks=1 00:10:29.588 --rc geninfo_unexecuted_blocks=1 00:10:29.588 00:10:29.588 ' 00:10:29.588 23:52:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:29.588 23:52:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=219474 00:10:29.588 23:52:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 219474 00:10:29.588 23:52:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:29.588 23:52:13 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 219474 ']' 00:10:29.588 23:52:13 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.588 23:52:13 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.588 23:52:13 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.588 23:52:13 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.588 23:52:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.588 [2024-12-09 23:52:13.912649] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:10:29.588 [2024-12-09 23:52:13.912700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid219474 ] 00:10:29.588 [2024-12-09 23:52:14.001959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.588 [2024-12-09 23:52:14.041217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.528 23:52:14 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.528 23:52:14 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:30.528 23:52:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:10:30.528 23:52:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 219474 00:10:30.528 23:52:14 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 219474 ']' 00:10:30.528 23:52:14 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 219474 00:10:30.528 23:52:14 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:10:30.528 23:52:14 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.528 23:52:14 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 219474 00:10:30.788 23:52:15 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.788 23:52:15 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.789 23:52:15 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 219474' 00:10:30.789 killing process with pid 219474 00:10:30.789 23:52:15 alias_rpc -- common/autotest_common.sh@973 -- # kill 219474 00:10:30.789 23:52:15 alias_rpc -- common/autotest_common.sh@978 -- # wait 219474 00:10:31.049 00:10:31.049 real 0m1.669s 00:10:31.049 user 0m1.791s 00:10:31.049 sys 0m0.513s 00:10:31.049 23:52:15 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.049 23:52:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.049 ************************************ 00:10:31.049 END TEST alias_rpc 00:10:31.049 ************************************ 00:10:31.049 23:52:15 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:31.049 23:52:15 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:10:31.049 23:52:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:31.049 23:52:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.049 23:52:15 -- common/autotest_common.sh@10 -- # set +x 00:10:31.049 ************************************ 00:10:31.049 START TEST spdkcli_tcp 00:10:31.049 ************************************ 00:10:31.049 23:52:15 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:10:31.049 * Looking for test storage... 
00:10:31.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:10:31.310 23:52:15 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:31.310 23:52:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:10:31.310 23:52:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:31.310 23:52:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.310 23:52:15 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:31.310 23:52:15 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.310 23:52:15 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:31.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.310 --rc genhtml_branch_coverage=1 00:10:31.310 --rc genhtml_function_coverage=1 00:10:31.310 --rc genhtml_legend=1 00:10:31.310 --rc geninfo_all_blocks=1 00:10:31.310 --rc geninfo_unexecuted_blocks=1 00:10:31.310 00:10:31.310 ' 00:10:31.310 23:52:15 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:31.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.310 --rc genhtml_branch_coverage=1 00:10:31.310 --rc genhtml_function_coverage=1 00:10:31.310 --rc genhtml_legend=1 00:10:31.310 --rc geninfo_all_blocks=1 00:10:31.310 --rc 
geninfo_unexecuted_blocks=1 00:10:31.310 00:10:31.310 ' 00:10:31.310 23:52:15 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:31.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.310 --rc genhtml_branch_coverage=1 00:10:31.310 --rc genhtml_function_coverage=1 00:10:31.310 --rc genhtml_legend=1 00:10:31.310 --rc geninfo_all_blocks=1 00:10:31.310 --rc geninfo_unexecuted_blocks=1 00:10:31.310 00:10:31.310 ' 00:10:31.310 23:52:15 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:31.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.310 --rc genhtml_branch_coverage=1 00:10:31.310 --rc genhtml_function_coverage=1 00:10:31.310 --rc genhtml_legend=1 00:10:31.310 --rc geninfo_all_blocks=1 00:10:31.310 --rc geninfo_unexecuted_blocks=1 00:10:31.310 00:10:31.310 ' 00:10:31.310 23:52:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:10:31.310 23:52:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:10:31.311 23:52:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:10:31.311 23:52:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:31.311 23:52:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:31.311 23:52:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:31.311 23:52:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:31.311 23:52:15 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:31.311 23:52:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.311 23:52:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=219905 00:10:31.311 23:52:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 219905 00:10:31.311 23:52:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:31.311 23:52:15 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 219905 ']' 00:10:31.311 23:52:15 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.311 23:52:15 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.311 23:52:15 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.311 23:52:15 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.311 23:52:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.311 [2024-12-09 23:52:15.678212] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:10:31.311 [2024-12-09 23:52:15.678267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid219905 ] 00:10:31.311 [2024-12-09 23:52:15.771368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:31.571 [2024-12-09 23:52:15.812351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.571 [2024-12-09 23:52:15.812351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.140 23:52:16 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.140 23:52:16 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:10:32.140 23:52:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:32.140 23:52:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=220060 00:10:32.140 23:52:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:32.401 [ 00:10:32.401 "bdev_malloc_delete", 00:10:32.401 "bdev_malloc_create", 00:10:32.401 "bdev_null_resize", 00:10:32.401 "bdev_null_delete", 00:10:32.401 "bdev_null_create", 00:10:32.401 "bdev_nvme_cuse_unregister", 00:10:32.401 "bdev_nvme_cuse_register", 00:10:32.401 "bdev_opal_new_user", 00:10:32.401 "bdev_opal_set_lock_state", 00:10:32.401 "bdev_opal_delete", 00:10:32.401 "bdev_opal_get_info", 00:10:32.401 "bdev_opal_create", 00:10:32.401 "bdev_nvme_opal_revert", 00:10:32.401 "bdev_nvme_opal_init", 00:10:32.401 "bdev_nvme_send_cmd", 00:10:32.401 "bdev_nvme_set_keys", 00:10:32.401 "bdev_nvme_get_path_iostat", 00:10:32.401 "bdev_nvme_get_mdns_discovery_info", 00:10:32.401 "bdev_nvme_stop_mdns_discovery", 00:10:32.401 "bdev_nvme_start_mdns_discovery", 00:10:32.401 "bdev_nvme_set_multipath_policy", 00:10:32.401 "bdev_nvme_set_preferred_path", 00:10:32.401 "bdev_nvme_get_io_paths", 00:10:32.401 "bdev_nvme_remove_error_injection", 00:10:32.401 "bdev_nvme_add_error_injection", 00:10:32.401 "bdev_nvme_get_discovery_info", 00:10:32.401 "bdev_nvme_stop_discovery", 00:10:32.401 "bdev_nvme_start_discovery", 00:10:32.401 "bdev_nvme_get_controller_health_info", 00:10:32.401 "bdev_nvme_disable_controller", 00:10:32.401 "bdev_nvme_enable_controller", 00:10:32.401 "bdev_nvme_reset_controller", 00:10:32.401 "bdev_nvme_get_transport_statistics", 00:10:32.401 "bdev_nvme_apply_firmware", 00:10:32.401 "bdev_nvme_detach_controller", 00:10:32.401 "bdev_nvme_get_controllers", 00:10:32.401 "bdev_nvme_attach_controller", 00:10:32.401 "bdev_nvme_set_hotplug", 00:10:32.401 "bdev_nvme_set_options", 00:10:32.401 "bdev_passthru_delete", 00:10:32.401 "bdev_passthru_create", 00:10:32.401 "bdev_lvol_set_parent_bdev", 00:10:32.401 "bdev_lvol_set_parent", 00:10:32.401 "bdev_lvol_check_shallow_copy", 00:10:32.401 "bdev_lvol_start_shallow_copy", 00:10:32.401 "bdev_lvol_grow_lvstore", 00:10:32.401 "bdev_lvol_get_lvols", 00:10:32.401 "bdev_lvol_get_lvstores", 00:10:32.401 "bdev_lvol_delete", 00:10:32.401 "bdev_lvol_set_read_only", 00:10:32.401 "bdev_lvol_resize", 00:10:32.401 "bdev_lvol_decouple_parent", 00:10:32.401 "bdev_lvol_inflate", 00:10:32.401 "bdev_lvol_rename", 00:10:32.401 "bdev_lvol_clone_bdev", 00:10:32.401 "bdev_lvol_clone", 00:10:32.401 "bdev_lvol_snapshot", 00:10:32.401 "bdev_lvol_create", 00:10:32.401 "bdev_lvol_delete_lvstore", 00:10:32.401 "bdev_lvol_rename_lvstore", 
00:10:32.401 "bdev_lvol_create_lvstore", 00:10:32.401 "bdev_raid_set_options", 00:10:32.401 "bdev_raid_remove_base_bdev", 00:10:32.401 "bdev_raid_add_base_bdev", 00:10:32.401 "bdev_raid_delete", 00:10:32.401 "bdev_raid_create", 00:10:32.401 "bdev_raid_get_bdevs", 00:10:32.401 "bdev_error_inject_error", 00:10:32.401 "bdev_error_delete", 00:10:32.401 "bdev_error_create", 00:10:32.401 "bdev_split_delete", 00:10:32.401 "bdev_split_create", 00:10:32.401 "bdev_delay_delete", 00:10:32.401 "bdev_delay_create", 00:10:32.401 "bdev_delay_update_latency", 00:10:32.401 "bdev_zone_block_delete", 00:10:32.401 "bdev_zone_block_create", 00:10:32.401 "blobfs_create", 00:10:32.401 "blobfs_detect", 00:10:32.401 "blobfs_set_cache_size", 00:10:32.401 "bdev_aio_delete", 00:10:32.401 "bdev_aio_rescan", 00:10:32.401 "bdev_aio_create", 00:10:32.401 "bdev_ftl_set_property", 00:10:32.401 "bdev_ftl_get_properties", 00:10:32.401 "bdev_ftl_get_stats", 00:10:32.401 "bdev_ftl_unmap", 00:10:32.401 "bdev_ftl_unload", 00:10:32.401 "bdev_ftl_delete", 00:10:32.401 "bdev_ftl_load", 00:10:32.401 "bdev_ftl_create", 00:10:32.401 "bdev_virtio_attach_controller", 00:10:32.402 "bdev_virtio_scsi_get_devices", 00:10:32.402 "bdev_virtio_detach_controller", 00:10:32.402 "bdev_virtio_blk_set_hotplug", 00:10:32.402 "bdev_iscsi_delete", 00:10:32.402 "bdev_iscsi_create", 00:10:32.402 "bdev_iscsi_set_options", 00:10:32.402 "accel_error_inject_error", 00:10:32.402 "ioat_scan_accel_module", 00:10:32.402 "dsa_scan_accel_module", 00:10:32.402 "iaa_scan_accel_module", 00:10:32.402 "vfu_virtio_create_fs_endpoint", 00:10:32.402 "vfu_virtio_create_scsi_endpoint", 00:10:32.402 "vfu_virtio_scsi_remove_target", 00:10:32.402 "vfu_virtio_scsi_add_target", 00:10:32.402 "vfu_virtio_create_blk_endpoint", 00:10:32.402 "vfu_virtio_delete_endpoint", 00:10:32.402 "keyring_file_remove_key", 00:10:32.402 "keyring_file_add_key", 00:10:32.402 "keyring_linux_set_options", 00:10:32.402 "fsdev_aio_delete", 00:10:32.402 "fsdev_aio_create", 00:10:32.402 "iscsi_get_histogram", 00:10:32.402 "iscsi_enable_histogram", 00:10:32.402 "iscsi_set_options", 00:10:32.402 "iscsi_get_auth_groups", 00:10:32.402 "iscsi_auth_group_remove_secret", 00:10:32.402 "iscsi_auth_group_add_secret", 00:10:32.402 "iscsi_delete_auth_group", 00:10:32.402 "iscsi_create_auth_group", 00:10:32.402 "iscsi_set_discovery_auth", 00:10:32.402 "iscsi_get_options", 00:10:32.402 "iscsi_target_node_request_logout", 00:10:32.402 "iscsi_target_node_set_redirect", 00:10:32.402 "iscsi_target_node_set_auth", 00:10:32.402 "iscsi_target_node_add_lun", 00:10:32.402 "iscsi_get_stats", 00:10:32.402 "iscsi_get_connections", 00:10:32.402 "iscsi_portal_group_set_auth", 00:10:32.402 "iscsi_start_portal_group", 00:10:32.402 "iscsi_delete_portal_group", 00:10:32.402 "iscsi_create_portal_group", 00:10:32.402 "iscsi_get_portal_groups", 00:10:32.402 "iscsi_delete_target_node", 00:10:32.402 "iscsi_target_node_remove_pg_ig_maps", 00:10:32.402 "iscsi_target_node_add_pg_ig_maps", 00:10:32.402 "iscsi_create_target_node", 00:10:32.402 "iscsi_get_target_nodes", 00:10:32.402 "iscsi_delete_initiator_group", 00:10:32.402 "iscsi_initiator_group_remove_initiators", 00:10:32.402 "iscsi_initiator_group_add_initiators", 00:10:32.402 "iscsi_create_initiator_group", 00:10:32.402 "iscsi_get_initiator_groups", 00:10:32.402 "nvmf_set_crdt", 00:10:32.402 "nvmf_set_config", 00:10:32.402 "nvmf_set_max_subsystems", 00:10:32.402 "nvmf_stop_mdns_prr", 00:10:32.402 "nvmf_publish_mdns_prr", 00:10:32.402 "nvmf_subsystem_get_listeners", 00:10:32.402 
"nvmf_subsystem_get_qpairs", 00:10:32.402 "nvmf_subsystem_get_controllers", 00:10:32.402 "nvmf_get_stats", 00:10:32.402 "nvmf_get_transports", 00:10:32.402 "nvmf_create_transport", 00:10:32.402 "nvmf_get_targets", 00:10:32.402 "nvmf_delete_target", 00:10:32.402 "nvmf_create_target", 00:10:32.402 "nvmf_subsystem_allow_any_host", 00:10:32.402 "nvmf_subsystem_set_keys", 00:10:32.402 "nvmf_subsystem_remove_host", 00:10:32.402 "nvmf_subsystem_add_host", 00:10:32.402 "nvmf_ns_remove_host", 00:10:32.402 "nvmf_ns_add_host", 00:10:32.402 "nvmf_subsystem_remove_ns", 00:10:32.402 "nvmf_subsystem_set_ns_ana_group", 00:10:32.402 "nvmf_subsystem_add_ns", 00:10:32.402 "nvmf_subsystem_listener_set_ana_state", 00:10:32.402 "nvmf_discovery_get_referrals", 00:10:32.402 "nvmf_discovery_remove_referral", 00:10:32.402 "nvmf_discovery_add_referral", 00:10:32.402 "nvmf_subsystem_remove_listener", 00:10:32.402 "nvmf_subsystem_add_listener", 00:10:32.402 "nvmf_delete_subsystem", 00:10:32.402 "nvmf_create_subsystem", 00:10:32.402 "nvmf_get_subsystems", 00:10:32.402 "env_dpdk_get_mem_stats", 00:10:32.402 "nbd_get_disks", 00:10:32.402 "nbd_stop_disk", 00:10:32.402 "nbd_start_disk", 00:10:32.402 "ublk_recover_disk", 00:10:32.402 "ublk_get_disks", 00:10:32.402 "ublk_stop_disk", 00:10:32.402 "ublk_start_disk", 00:10:32.402 "ublk_destroy_target", 00:10:32.402 "ublk_create_target", 00:10:32.402 "virtio_blk_create_transport", 00:10:32.402 "virtio_blk_get_transports", 00:10:32.402 "vhost_controller_set_coalescing", 00:10:32.402 "vhost_get_controllers", 00:10:32.402 "vhost_delete_controller", 00:10:32.402 "vhost_create_blk_controller", 00:10:32.402 "vhost_scsi_controller_remove_target", 00:10:32.402 "vhost_scsi_controller_add_target", 00:10:32.402 "vhost_start_scsi_controller", 00:10:32.402 "vhost_create_scsi_controller", 00:10:32.402 "thread_set_cpumask", 00:10:32.402 "scheduler_set_options", 00:10:32.402 "framework_get_governor", 00:10:32.402 "framework_get_scheduler", 00:10:32.402 "framework_set_scheduler", 00:10:32.402 "framework_get_reactors", 00:10:32.402 "thread_get_io_channels", 00:10:32.402 "thread_get_pollers", 00:10:32.402 "thread_get_stats", 00:10:32.402 "framework_monitor_context_switch", 00:10:32.402 "spdk_kill_instance", 00:10:32.402 "log_enable_timestamps", 00:10:32.402 "log_get_flags", 00:10:32.402 "log_clear_flag", 00:10:32.402 "log_set_flag", 00:10:32.402 "log_get_level", 00:10:32.402 "log_set_level", 00:10:32.402 "log_get_print_level", 00:10:32.402 "log_set_print_level", 00:10:32.402 "framework_enable_cpumask_locks", 00:10:32.402 "framework_disable_cpumask_locks", 00:10:32.402 "framework_wait_init", 00:10:32.402 "framework_start_init", 00:10:32.402 "scsi_get_devices", 00:10:32.402 "bdev_get_histogram", 00:10:32.402 "bdev_enable_histogram", 00:10:32.402 "bdev_set_qos_limit", 00:10:32.402 "bdev_set_qd_sampling_period", 00:10:32.402 "bdev_get_bdevs", 00:10:32.402 "bdev_reset_iostat", 00:10:32.402 "bdev_get_iostat", 00:10:32.402 "bdev_examine", 00:10:32.402 "bdev_wait_for_examine", 00:10:32.402 "bdev_set_options", 00:10:32.402 "accel_get_stats", 00:10:32.402 "accel_set_options", 00:10:32.402 "accel_set_driver", 00:10:32.402 "accel_crypto_key_destroy", 00:10:32.402 "accel_crypto_keys_get", 00:10:32.402 "accel_crypto_key_create", 00:10:32.402 "accel_assign_opc", 00:10:32.402 "accel_get_module_info", 00:10:32.402 "accel_get_opc_assignments", 00:10:32.402 "vmd_rescan", 00:10:32.402 "vmd_remove_device", 00:10:32.402 "vmd_enable", 00:10:32.402 "sock_get_default_impl", 00:10:32.402 "sock_set_default_impl", 
00:10:32.402 "sock_impl_set_options", 00:10:32.402 "sock_impl_get_options", 00:10:32.402 "iobuf_get_stats", 00:10:32.402 "iobuf_set_options", 00:10:32.402 "keyring_get_keys", 00:10:32.402 "vfu_tgt_set_base_path", 00:10:32.402 "framework_get_pci_devices", 00:10:32.402 "framework_get_config", 00:10:32.402 "framework_get_subsystems", 00:10:32.402 "fsdev_set_opts", 00:10:32.402 "fsdev_get_opts", 00:10:32.402 "trace_get_info", 00:10:32.402 "trace_get_tpoint_group_mask", 00:10:32.402 "trace_disable_tpoint_group", 00:10:32.402 "trace_enable_tpoint_group", 00:10:32.402 "trace_clear_tpoint_mask", 00:10:32.402 "trace_set_tpoint_mask", 00:10:32.402 "notify_get_notifications", 00:10:32.402 "notify_get_types", 00:10:32.402 "spdk_get_version", 00:10:32.402 "rpc_get_methods" 00:10:32.402 ] 00:10:32.402 23:52:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:32.402 23:52:16 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.402 23:52:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:32.402 23:52:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:32.402 23:52:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 219905 00:10:32.402 23:52:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 219905 ']' 00:10:32.402 23:52:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 219905 00:10:32.402 23:52:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:10:32.402 23:52:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.402 23:52:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 219905 00:10:32.402 23:52:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.402 23:52:16 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.402 23:52:16 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 219905' 00:10:32.402 killing process with pid 219905 00:10:32.402 23:52:16 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 219905 00:10:32.402 23:52:16 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 219905 00:10:32.663 00:10:32.663 real 0m1.700s 00:10:32.663 user 0m3.073s 00:10:32.663 sys 0m0.569s 00:10:32.663 23:52:17 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.663 23:52:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:32.663 ************************************ 00:10:32.663 END TEST spdkcli_tcp 00:10:32.663 ************************************ 00:10:32.922 23:52:17 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:32.922 23:52:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:32.922 23:52:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.922 23:52:17 -- common/autotest_common.sh@10 -- # set +x 00:10:32.922 ************************************ 00:10:32.922 START TEST dpdk_mem_utility 00:10:32.922 ************************************ 00:10:32.922 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:32.922 * Looking for test storage... 
00:10:32.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:10:32.922 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:32.922 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:10:32.922 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:32.922 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:32.922 23:52:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.183 23:52:17 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:33.183 23:52:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:33.183 23:52:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.183 23:52:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:33.183 23:52:17 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.183 23:52:17 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.183 23:52:17 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.183 23:52:17 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:33.183 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.183 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:33.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.183 --rc genhtml_branch_coverage=1 00:10:33.183 --rc genhtml_function_coverage=1 00:10:33.183 --rc genhtml_legend=1 00:10:33.183 --rc geninfo_all_blocks=1 00:10:33.183 --rc geninfo_unexecuted_blocks=1 00:10:33.183 00:10:33.183 ' 00:10:33.183 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:33.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.183 --rc 
genhtml_branch_coverage=1 00:10:33.183 --rc genhtml_function_coverage=1 00:10:33.183 --rc genhtml_legend=1 00:10:33.183 --rc geninfo_all_blocks=1 00:10:33.183 --rc geninfo_unexecuted_blocks=1 00:10:33.183 00:10:33.183 ' 00:10:33.183 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:33.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.183 --rc genhtml_branch_coverage=1 00:10:33.183 --rc genhtml_function_coverage=1 00:10:33.183 --rc genhtml_legend=1 00:10:33.183 --rc geninfo_all_blocks=1 00:10:33.183 --rc geninfo_unexecuted_blocks=1 00:10:33.183 00:10:33.183 ' 00:10:33.183 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:33.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.183 --rc genhtml_branch_coverage=1 00:10:33.183 --rc genhtml_function_coverage=1 00:10:33.183 --rc genhtml_legend=1 00:10:33.183 --rc geninfo_all_blocks=1 00:10:33.183 --rc geninfo_unexecuted_blocks=1 00:10:33.183 00:10:33.183 ' 00:10:33.183 23:52:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:10:33.183 23:52:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=220362 00:10:33.183 23:52:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:33.183 23:52:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 220362 00:10:33.183 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 220362 ']' 00:10:33.183 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.183 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.183 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.183 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.183 23:52:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:33.183 [2024-12-09 23:52:17.458588] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:10:33.183 [2024-12-09 23:52:17.458641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220362 ] 00:10:33.183 [2024-12-09 23:52:17.547654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.183 [2024-12-09 23:52:17.585961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.124 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.124 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:10:34.124 23:52:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:34.124 23:52:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:34.124 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.124 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:34.124 { 00:10:34.124 "filename": "/tmp/spdk_mem_dump.txt" 00:10:34.124 } 00:10:34.124 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.124 23:52:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:10:34.124 DPDK memory size 818.000000 MiB in 1 heap(s) 00:10:34.124 1 heaps totaling size 818.000000 MiB 00:10:34.124 size: 818.000000 MiB heap id: 0 00:10:34.124 end heaps---------- 00:10:34.124 9 mempools totaling size 603.782043 MiB 00:10:34.124 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:34.124 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:34.124 size: 100.555481 MiB name: bdev_io_220362 00:10:34.124 size: 50.003479 MiB name: msgpool_220362 00:10:34.124 size: 36.509338 MiB name: fsdev_io_220362 00:10:34.124 size: 21.763794 MiB name: PDU_Pool 00:10:34.124 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:34.124 size: 4.133484 MiB name: evtpool_220362 00:10:34.124 size: 0.026123 MiB name: Session_Pool 00:10:34.124 end mempools------- 00:10:34.124 6 memzones totaling size 4.142822 MiB 00:10:34.124 size: 1.000366 MiB name: RG_ring_0_220362 00:10:34.124 size: 1.000366 MiB name: RG_ring_1_220362 00:10:34.124 size: 1.000366 MiB name: RG_ring_4_220362 00:10:34.124 size: 1.000366 MiB name: RG_ring_5_220362 00:10:34.124 size: 0.125366 MiB name: RG_ring_2_220362 00:10:34.124 size: 0.015991 MiB name: RG_ring_3_220362 00:10:34.124 end memzones------- 00:10:34.124 23:52:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:10:34.124 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:10:34.124 list of free elements. 
size: 10.852478 MiB 00:10:34.124 element at address: 0x200019200000 with size: 0.999878 MiB 00:10:34.124 element at address: 0x200019400000 with size: 0.999878 MiB 00:10:34.124 element at address: 0x200000400000 with size: 0.998535 MiB 00:10:34.124 element at address: 0x200032000000 with size: 0.994446 MiB 00:10:34.124 element at address: 0x200006400000 with size: 0.959839 MiB 00:10:34.124 element at address: 0x200012c00000 with size: 0.944275 MiB 00:10:34.124 element at address: 0x200019600000 with size: 0.936584 MiB 00:10:34.124 element at address: 0x200000200000 with size: 0.717346 MiB 00:10:34.124 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:10:34.124 element at address: 0x200000c00000 with size: 0.495422 MiB 00:10:34.124 element at address: 0x20000a600000 with size: 0.490723 MiB 00:10:34.124 element at address: 0x200019800000 with size: 0.485657 MiB 00:10:34.124 element at address: 0x200003e00000 with size: 0.481934 MiB 00:10:34.124 element at address: 0x200028200000 with size: 0.410034 MiB 00:10:34.124 element at address: 0x200000800000 with size: 0.355042 MiB 00:10:34.124 list of standard malloc elements. size: 199.218628 MiB 00:10:34.124 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:10:34.124 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:10:34.124 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:10:34.124 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:10:34.124 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:10:34.124 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:10:34.124 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:10:34.124 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:10:34.124 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:10:34.124 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:10:34.124 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:10:34.124 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:10:34.124 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:10:34.124 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:10:34.124 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:10:34.124 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:10:34.124 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:10:34.124 element at address: 0x20000085b040 with size: 0.000183 MiB 00:10:34.124 element at address: 0x20000085f300 with size: 0.000183 MiB 00:10:34.124 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:10:34.124 element at address: 0x20000087f680 with size: 0.000183 MiB 00:10:34.124 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:10:34.124 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:10:34.125 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:10:34.125 element at address: 0x200000cff000 with size: 0.000183 MiB 00:10:34.125 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:10:34.125 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:10:34.125 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:10:34.125 element at address: 0x200003efb980 with size: 0.000183 MiB 00:10:34.125 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:10:34.125 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:10:34.125 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:10:34.125 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:10:34.125 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:10:34.125 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:10:34.125 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:10:34.125 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:10:34.125 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:10:34.125 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:10:34.125 element at address: 0x200028268f80 with size: 0.000183 MiB 00:10:34.125 element at address: 0x200028269040 with size: 0.000183 MiB 00:10:34.125 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:10:34.125 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:10:34.125 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:10:34.125 list of memzone associated elements. size: 607.928894 MiB 00:10:34.125 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:10:34.125 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:34.125 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:10:34.125 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:34.125 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:10:34.125 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_220362_0 00:10:34.125 element at address: 0x200000dff380 with size: 48.003052 MiB 00:10:34.125 associated memzone info: size: 48.002930 MiB name: MP_msgpool_220362_0 00:10:34.125 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:10:34.125 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_220362_0 00:10:34.125 element at address: 0x2000199be940 with size: 20.255554 MiB 00:10:34.125 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:34.125 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:10:34.125 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:34.125 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:10:34.125 associated memzone info: size: 3.000122 MiB name: MP_evtpool_220362_0 00:10:34.125 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:10:34.125 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_220362 00:10:34.125 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:10:34.125 associated memzone info: size: 1.007996 MiB name: MP_evtpool_220362 00:10:34.125 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:10:34.125 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:34.125 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:10:34.125 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:34.125 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:10:34.125 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:34.125 element at address: 0x200003efba40 with size: 1.008118 MiB 00:10:34.125 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:34.125 element at address: 0x200000cff180 with size: 1.000488 MiB 00:10:34.125 associated memzone info: size: 1.000366 MiB name: RG_ring_0_220362 00:10:34.125 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:10:34.125 associated memzone info: size: 1.000366 MiB name: RG_ring_1_220362 00:10:34.125 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:10:34.125 associated memzone info: size: 1.000366 MiB name: RG_ring_4_220362 00:10:34.125 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:10:34.125 associated memzone info: size: 1.000366 MiB name: RG_ring_5_220362 00:10:34.125 element at address: 0x20000087f740 with size: 0.500488 MiB 00:10:34.125 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_220362 00:10:34.125 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:10:34.125 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_220362 00:10:34.125 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:10:34.125 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:34.125 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:10:34.125 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:34.125 element at address: 0x20001987c540 with size: 0.250488 MiB 00:10:34.125 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:34.125 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:10:34.125 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_220362 00:10:34.125 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:10:34.125 associated memzone info: size: 0.125366 MiB name: RG_ring_2_220362 00:10:34.125 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:10:34.125 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:34.125 element at address: 0x200028269100 with size: 0.023743 MiB 00:10:34.125 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:34.125 element at address: 0x20000085b100 with size: 0.016113 MiB 00:10:34.125 associated memzone info: size: 0.015991 MiB name: RG_ring_3_220362 00:10:34.125 element at address: 0x20002826f240 with size: 0.002441 MiB 00:10:34.125 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:34.125 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:10:34.125 associated memzone info: size: 0.000183 MiB name: MP_msgpool_220362 00:10:34.125 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:10:34.125 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_220362 00:10:34.125 element at address: 0x20000085af00 with size: 0.000305 MiB 00:10:34.125 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_220362 00:10:34.125 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:10:34.125 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:34.125 23:52:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:34.125 23:52:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 220362 00:10:34.125 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 220362 ']' 00:10:34.125 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 220362 00:10:34.125 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:10:34.125 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.125 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 220362 00:10:34.125 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.125 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.125 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 220362' 00:10:34.125 killing process with pid 220362 00:10:34.125 23:52:18 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 220362 00:10:34.125 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 220362 00:10:34.385 00:10:34.385 real 0m1.569s 00:10:34.385 user 0m1.573s 00:10:34.385 sys 0m0.536s 00:10:34.385 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.385 23:52:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:34.385 ************************************ 00:10:34.385 END TEST dpdk_mem_utility 00:10:34.385 ************************************ 00:10:34.385 23:52:18 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:10:34.385 23:52:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:34.385 23:52:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.385 23:52:18 -- common/autotest_common.sh@10 -- # set +x 00:10:34.385 ************************************ 00:10:34.385 START TEST event 00:10:34.385 ************************************ 00:10:34.385 23:52:18 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:10:34.645 * Looking for test storage... 00:10:34.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:10:34.645 23:52:18 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:34.645 23:52:18 event -- common/autotest_common.sh@1711 -- # lcov --version 00:10:34.645 23:52:18 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:34.645 23:52:19 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:34.645 23:52:19 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.645 23:52:19 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.645 23:52:19 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.645 23:52:19 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.645 23:52:19 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.645 23:52:19 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.645 23:52:19 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.645 23:52:19 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.645 23:52:19 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.645 23:52:19 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.645 23:52:19 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.645 23:52:19 event -- scripts/common.sh@344 -- # case "$op" in 00:10:34.645 23:52:19 event -- scripts/common.sh@345 -- # : 1 00:10:34.645 23:52:19 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.645 23:52:19 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.645 23:52:19 event -- scripts/common.sh@365 -- # decimal 1 00:10:34.645 23:52:19 event -- scripts/common.sh@353 -- # local d=1 00:10:34.645 23:52:19 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.645 23:52:19 event -- scripts/common.sh@355 -- # echo 1 00:10:34.645 23:52:19 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.645 23:52:19 event -- scripts/common.sh@366 -- # decimal 2 00:10:34.645 23:52:19 event -- scripts/common.sh@353 -- # local d=2 00:10:34.645 23:52:19 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.645 23:52:19 event -- scripts/common.sh@355 -- # echo 2 00:10:34.645 23:52:19 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.645 23:52:19 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.645 23:52:19 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.645 23:52:19 event -- scripts/common.sh@368 -- # return 0 00:10:34.645 23:52:19 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.645 23:52:19 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:34.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.645 --rc genhtml_branch_coverage=1 00:10:34.645 --rc genhtml_function_coverage=1 00:10:34.645 --rc genhtml_legend=1 00:10:34.645 --rc geninfo_all_blocks=1 00:10:34.645 --rc geninfo_unexecuted_blocks=1 00:10:34.645 00:10:34.645 ' 00:10:34.645 23:52:19 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:34.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.645 --rc genhtml_branch_coverage=1 00:10:34.645 --rc genhtml_function_coverage=1 00:10:34.645 --rc genhtml_legend=1 00:10:34.645 --rc geninfo_all_blocks=1 00:10:34.645 --rc geninfo_unexecuted_blocks=1 00:10:34.645 00:10:34.645 ' 00:10:34.645 23:52:19 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:34.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.645 --rc genhtml_branch_coverage=1 00:10:34.645 --rc genhtml_function_coverage=1 00:10:34.645 --rc genhtml_legend=1 00:10:34.645 --rc geninfo_all_blocks=1 00:10:34.645 --rc geninfo_unexecuted_blocks=1 00:10:34.645 00:10:34.645 ' 00:10:34.645 23:52:19 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:34.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.645 --rc genhtml_branch_coverage=1 00:10:34.645 --rc genhtml_function_coverage=1 00:10:34.645 --rc genhtml_legend=1 00:10:34.645 --rc geninfo_all_blocks=1 00:10:34.645 --rc geninfo_unexecuted_blocks=1 00:10:34.645 00:10:34.645 ' 00:10:34.645 23:52:19 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:10:34.645 23:52:19 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:34.645 23:52:19 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:34.645 23:52:19 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:34.645 23:52:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.645 23:52:19 event -- common/autotest_common.sh@10 -- # set +x 00:10:34.645 ************************************ 00:10:34.645 START TEST event_perf 00:10:34.645 ************************************ 00:10:34.645 23:52:19 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:10:34.645 Running I/O for 1 seconds...[2024-12-09 23:52:19.102599] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:10:34.645 [2024-12-09 23:52:19.102663] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220725 ] 00:10:34.906 [2024-12-09 23:52:19.194757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:34.906 [2024-12-09 23:52:19.236992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.906 [2024-12-09 23:52:19.237103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.906 [2024-12-09 23:52:19.237210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.906 [2024-12-09 23:52:19.237211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.846 Running I/O for 1 seconds... 00:10:35.846 lcore 0: 207520 00:10:35.846 lcore 1: 207521 00:10:35.846 lcore 2: 207520 00:10:35.846 lcore 3: 207518 00:10:35.846 done. 00:10:35.846 00:10:35.846 real 0m1.199s 00:10:35.846 user 0m4.114s 00:10:35.846 sys 0m0.083s 00:10:35.846 23:52:20 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.846 23:52:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:35.846 ************************************ 00:10:35.846 END TEST event_perf 00:10:35.846 ************************************ 00:10:35.846 23:52:20 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:36.106 23:52:20 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:36.106 23:52:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.106 23:52:20 event -- common/autotest_common.sh@10 -- # set +x 00:10:36.106 ************************************ 00:10:36.106 START TEST event_reactor 00:10:36.106 ************************************ 00:10:36.106 23:52:20 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:36.106 [2024-12-09 23:52:20.382612] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:10:36.106 [2024-12-09 23:52:20.382693] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220920 ] 00:10:36.106 [2024-12-09 23:52:20.479989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.106 [2024-12-09 23:52:20.522866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.487 test_start 00:10:37.487 oneshot 00:10:37.487 tick 100 00:10:37.487 tick 100 00:10:37.487 tick 250 00:10:37.487 tick 100 00:10:37.487 tick 100 00:10:37.487 tick 100 00:10:37.487 tick 250 00:10:37.487 tick 500 00:10:37.487 tick 100 00:10:37.487 tick 100 00:10:37.487 tick 250 00:10:37.487 tick 100 00:10:37.487 tick 100 00:10:37.487 test_end 00:10:37.487 00:10:37.487 real 0m1.203s 00:10:37.487 user 0m1.103s 00:10:37.487 sys 0m0.096s 00:10:37.487 23:52:21 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.487 23:52:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:37.487 ************************************ 00:10:37.487 END TEST event_reactor 00:10:37.487 ************************************ 00:10:37.487 23:52:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:37.487 23:52:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:37.487 23:52:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.488 23:52:21 event -- common/autotest_common.sh@10 -- # set +x 00:10:37.488 ************************************ 00:10:37.488 START TEST event_reactor_perf 00:10:37.488 ************************************ 00:10:37.488 23:52:21 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:37.488 [2024-12-09 23:52:21.669509] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:10:37.488 [2024-12-09 23:52:21.669592] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221073 ] 00:10:37.488 [2024-12-09 23:52:21.763599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.488 [2024-12-09 23:52:21.805609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.429 test_start 00:10:38.429 test_end 00:10:38.429 Performance: 526780 events per second 00:10:38.429 00:10:38.429 real 0m1.197s 00:10:38.429 user 0m1.110s 00:10:38.429 sys 0m0.083s 00:10:38.429 23:52:22 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.429 23:52:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:38.429 ************************************ 00:10:38.429 END TEST event_reactor_perf 00:10:38.429 ************************************ 00:10:38.429 23:52:22 event -- event/event.sh@49 -- # uname -s 00:10:38.429 23:52:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:38.429 23:52:22 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:10:38.429 23:52:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:38.429 23:52:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.429 23:52:22 event -- common/autotest_common.sh@10 -- # set +x 00:10:38.689 ************************************ 00:10:38.689 START TEST event_scheduler 00:10:38.689 ************************************ 00:10:38.689 23:52:22 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:10:38.689 * Looking for test storage... 
00:10:38.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:10:38.689 23:52:23 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:38.689 23:52:23 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:10:38.689 23:52:23 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:38.689 23:52:23 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.690 23:52:23 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:38.690 23:52:23 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.690 23:52:23 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:38.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.690 --rc genhtml_branch_coverage=1 00:10:38.690 --rc genhtml_function_coverage=1 00:10:38.690 --rc genhtml_legend=1 00:10:38.690 --rc geninfo_all_blocks=1 00:10:38.690 --rc geninfo_unexecuted_blocks=1 00:10:38.690 00:10:38.690 ' 00:10:38.690 23:52:23 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:38.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.690 --rc genhtml_branch_coverage=1 00:10:38.690 --rc genhtml_function_coverage=1 00:10:38.690 --rc genhtml_legend=1 00:10:38.690 --rc geninfo_all_blocks=1 00:10:38.690 --rc geninfo_unexecuted_blocks=1 00:10:38.690 00:10:38.690 ' 00:10:38.690 23:52:23 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:38.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.690 --rc genhtml_branch_coverage=1 00:10:38.690 --rc genhtml_function_coverage=1 00:10:38.690 --rc genhtml_legend=1 00:10:38.690 --rc geninfo_all_blocks=1 00:10:38.690 --rc geninfo_unexecuted_blocks=1 00:10:38.690 00:10:38.690 ' 00:10:38.690 23:52:23 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:38.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.690 --rc genhtml_branch_coverage=1 00:10:38.690 --rc genhtml_function_coverage=1 00:10:38.690 --rc genhtml_legend=1 00:10:38.690 --rc geninfo_all_blocks=1 00:10:38.690 --rc geninfo_unexecuted_blocks=1 00:10:38.690 00:10:38.690 ' 00:10:38.690 23:52:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:38.690 23:52:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:38.690 23:52:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=221374 00:10:38.690 23:52:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:38.690 23:52:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 221374 
00:10:38.690 23:52:23 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 221374 ']' 00:10:38.690 23:52:23 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.690 23:52:23 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.690 23:52:23 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.690 23:52:23 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.690 23:52:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:38.950 [2024-12-09 23:52:23.172190] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:10:38.950 [2024-12-09 23:52:23.172238] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221374 ] 00:10:38.950 [2024-12-09 23:52:23.264292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.950 [2024-12-09 23:52:23.308236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.950 [2024-12-09 23:52:23.308271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.950 [2024-12-09 23:52:23.308379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.950 [2024-12-09 23:52:23.308378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.890 23:52:24 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.890 23:52:24 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:10:39.890 23:52:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:39.890 23:52:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.890 23:52:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 [2024-12-09 23:52:24.042997] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:10:39.890 [2024-12-09 23:52:24.043018] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:39.890 [2024-12-09 23:52:24.043030] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:39.890 [2024-12-09 23:52:24.043037] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:39.890 [2024-12-09 23:52:24.043044] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:39.890 23:52:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.890 23:52:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:39.890 23:52:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.890 23:52:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 [2024-12-09 23:52:24.118190] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:10:39.890 23:52:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.890 23:52:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:39.890 23:52:24 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:39.890 23:52:24 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.890 23:52:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 ************************************ 00:10:39.890 START TEST scheduler_create_thread 00:10:39.890 ************************************ 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 2 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 3 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 4 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 5 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 6 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 7 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 8 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 9 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 10 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.890 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:40.460 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.460 23:52:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:40.460 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.460 23:52:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.841 23:52:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.841 23:52:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:41.841 23:52:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:41.841 23:52:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.841 23:52:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.224 23:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.224 00:10:43.224 real 0m3.102s 00:10:43.224 user 0m0.024s 00:10:43.224 sys 0m0.007s 00:10:43.224 23:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.224 23:52:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.224 ************************************ 00:10:43.224 END TEST scheduler_create_thread 00:10:43.224 ************************************ 00:10:43.224 23:52:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:43.224 23:52:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 221374 00:10:43.224 23:52:27 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 221374 ']' 00:10:43.224 23:52:27 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 221374 00:10:43.224 23:52:27 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:43.224 23:52:27 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.224 23:52:27 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 221374 00:10:43.224 23:52:27 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:43.224 23:52:27 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:43.224 23:52:27 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 221374' 00:10:43.224 killing process with pid 221374 00:10:43.224 23:52:27 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 221374 00:10:43.224 23:52:27 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 221374 00:10:43.224 [2024-12-09 23:52:27.637363] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
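
The scheduler_create_thread run above exercises SPDK's dynamic scheduler purely over JSON-RPC: rpc.py is invoked with the scheduler test plugin loaded, pinned busy/idle threads are created per core mask, an unpinned thread has its activity changed, and another is created and deleted. A minimal standalone sketch of that same RPC sequence follows; the method names and flags (-n name, -m cpumask, -a active-percentage) are copied from the trace, while the socket path and the PYTHONPATH entry for the plugin module are assumptions.

  # assumed: the scheduler test plugin module lives under the SPDK test tree
  export PYTHONPATH=$PYTHONPATH:./test/event/scheduler
  rpc="./scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"   # socket path assumed

  # one busy (-a 100) and one idle (-a 0) thread pinned to each core in the mask
  $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0

  # unpinned threads: fixed load, a load change, and a create/delete pair
  $rpc scheduler_thread_create -n one_third_active -a 30
  tid=$($rpc scheduler_thread_create -n half_active -a 0)   # the RPC prints the new thread id
  $rpc scheduler_thread_set_active "$tid" 50
  tid=$($rpc scheduler_thread_create -n deleted -a 100)
  $rpc scheduler_thread_delete "$tid"
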
00:10:43.484 00:10:43.484 real 0m4.891s 00:10:43.484 user 0m9.585s 00:10:43.484 sys 0m0.482s 00:10:43.484 23:52:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.484 23:52:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:43.484 ************************************ 00:10:43.484 END TEST event_scheduler 00:10:43.484 ************************************ 00:10:43.484 23:52:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:43.484 23:52:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:43.484 23:52:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:43.484 23:52:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.484 23:52:27 event -- common/autotest_common.sh@10 -- # set +x 00:10:43.484 ************************************ 00:10:43.484 START TEST app_repeat 00:10:43.484 ************************************ 00:10:43.484 23:52:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:10:43.484 23:52:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.484 23:52:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:43.484 23:52:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:43.484 23:52:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:43.484 23:52:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:43.484 23:52:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:43.484 23:52:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:43.484 23:52:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=222297 00:10:43.484 23:52:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:43.484 23:52:27 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:43.484 23:52:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 222297' 00:10:43.484 Process app_repeat pid: 222297 00:10:43.484 23:52:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:43.484 23:52:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:43.484 spdk_app_start Round 0 00:10:43.484 23:52:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 222297 /var/tmp/spdk-nbd.sock 00:10:43.484 23:52:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 222297 ']' 00:10:43.484 23:52:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:43.484 23:52:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.484 23:52:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:43.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:43.484 23:52:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.484 23:52:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:43.484 [2024-12-09 23:52:27.945500] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
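
Before the rounds start, the app_repeat harness loads the nbd kernel module, launches the test application against a dedicated RPC socket, and arms a trap so the app is killed on any abnormal exit. A rough sketch of that setup, assuming the app is started in the background from the same shell and approximating the waitforlisten helper with a simple RPC poll:

  modprobe nbd                                        # NBD devices back the data-verify steps
  rpc_sock=/var/tmp/spdk-nbd.sock
  ./test/event/app_repeat/app_repeat -r "$rpc_sock" -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'kill "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT

  # block until the app answers RPC on the UNIX socket (stand-in for waitforlisten)
  until ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; do
      sleep 0.1
  done
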
00:10:43.484 [2024-12-09 23:52:27.945568] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222297 ] 00:10:43.743 [2024-12-09 23:52:28.038678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:43.743 [2024-12-09 23:52:28.080366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.743 [2024-12-09 23:52:28.080368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.743 23:52:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.743 23:52:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:43.744 23:52:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:44.008 Malloc0 00:10:44.008 23:52:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:44.268 Malloc1 00:10:44.268 23:52:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:44.268 23:52:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:44.527 /dev/nbd0 00:10:44.527 23:52:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:44.527 23:52:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:44.527 23:52:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:44.527 23:52:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:44.527 23:52:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:44.527 23:52:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:44.527 23:52:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:10:44.527 23:52:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:44.527 23:52:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:44.527 23:52:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:44.527 23:52:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:44.527 1+0 records in 00:10:44.527 1+0 records out 00:10:44.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257903 s, 15.9 MB/s 00:10:44.527 23:52:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:44.527 23:52:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:44.527 23:52:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:44.527 23:52:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:44.527 23:52:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:44.527 23:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:44.527 23:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:44.527 23:52:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:44.786 /dev/nbd1 00:10:44.786 23:52:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:44.786 23:52:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:44.786 23:52:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:44.786 23:52:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:44.786 23:52:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:44.786 23:52:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:44.786 23:52:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:44.786 23:52:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:44.786 23:52:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:44.786 23:52:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:44.786 23:52:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:44.786 1+0 records in 00:10:44.786 1+0 records out 00:10:44.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266896 s, 15.3 MB/s 00:10:44.786 23:52:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:44.786 23:52:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:44.786 23:52:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:44.786 23:52:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:44.786 23:52:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:44.786 23:52:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:44.786 23:52:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:44.786 
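
The block above is the attach-and-wait pattern the test uses for each NBD device: create a 64 MiB malloc bdev with a 4096-byte block size, export it with nbd_start_disk, then poll /proc/partitions and issue one O_DIRECT read to prove the kernel device actually serves I/O. A condensed sketch (file paths assumed; the RPC names and the dd/stat readiness check mirror the trace):

  rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096        # -> Malloc0
  $rpc bdev_malloc_create 64 4096        # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1

  waitfornbd() {   # simplified from the autotest_common.sh helper of the same name
      local name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1
      done
      # one 4 KiB direct read: the device must answer I/O, not merely exist
      dd if=/dev/$name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      [ "$(stat -c %s /tmp/nbdtest)" = 4096 ] && rm -f /tmp/nbdtest
  }
  waitfornbd nbd0
  waitfornbd nbd1
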
23:52:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:44.786 23:52:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:44.786 23:52:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:45.047 { 00:10:45.047 "nbd_device": "/dev/nbd0", 00:10:45.047 "bdev_name": "Malloc0" 00:10:45.047 }, 00:10:45.047 { 00:10:45.047 "nbd_device": "/dev/nbd1", 00:10:45.047 "bdev_name": "Malloc1" 00:10:45.047 } 00:10:45.047 ]' 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:45.047 { 00:10:45.047 "nbd_device": "/dev/nbd0", 00:10:45.047 "bdev_name": "Malloc0" 00:10:45.047 }, 00:10:45.047 { 00:10:45.047 "nbd_device": "/dev/nbd1", 00:10:45.047 "bdev_name": "Malloc1" 00:10:45.047 } 00:10:45.047 ]' 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:45.047 /dev/nbd1' 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:45.047 /dev/nbd1' 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:45.047 256+0 records in 00:10:45.047 256+0 records out 00:10:45.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111352 s, 94.2 MB/s 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:45.047 256+0 records in 00:10:45.047 256+0 records out 00:10:45.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189864 s, 55.2 MB/s 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:45.047 256+0 records in 00:10:45.047 256+0 records out 00:10:45.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200886 s, 52.2 MB/s 00:10:45.047 23:52:29 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:45.047 23:52:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:45.307 23:52:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:45.307 23:52:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:45.307 23:52:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:45.307 23:52:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:45.307 23:52:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:45.307 23:52:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:45.307 23:52:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:45.307 23:52:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:45.308 23:52:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:45.308 23:52:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:45.568 23:52:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:45.568 23:52:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:45.568 23:52:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:45.568 23:52:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:45.568 23:52:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:10:45.568 23:52:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:45.568 23:52:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:45.568 23:52:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:45.568 23:52:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:45.568 23:52:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.568 23:52:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:45.827 23:52:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:45.827 23:52:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:45.827 23:52:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:45.827 23:52:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:45.827 23:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:45.827 23:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:45.827 23:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:45.827 23:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:45.827 23:52:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:45.827 23:52:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:45.827 23:52:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:45.827 23:52:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:45.827 23:52:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:46.087 23:52:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:46.087 [2024-12-09 23:52:30.498170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:46.087 [2024-12-09 23:52:30.532876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.087 [2024-12-09 23:52:30.532876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.347 [2024-12-09 23:52:30.572779] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:46.347 [2024-12-09 23:52:30.572815] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:48.885 23:52:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:48.885 23:52:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:48.885 spdk_app_start Round 1 00:10:48.885 23:52:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 222297 /var/tmp/spdk-nbd.sock 00:10:48.885 23:52:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 222297 ']' 00:10:48.885 23:52:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:48.885 23:52:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.885 23:52:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:48.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
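
Round 0 has just completed the full data path shown above: 1 MiB of random data is written through each NBD device and then compared byte-for-byte against the source file before the devices are detached. The essentials, as a sketch (temp-file location assumed; block sizes, counts, flags and the 1M cmp window are as traced):

  rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  tmp=/tmp/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256               # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct    # write it through NBD
  done
  for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$dev"                               # read back and compare
  done
  rm "$tmp"

  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  [ "$($rpc nbd_get_disks | grep -c /dev/nbd)" = 0 ]           # nothing should remain exported
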
00:10:48.885 23:52:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.145 23:52:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:49.145 23:52:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.145 23:52:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:49.145 23:52:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:49.405 Malloc0 00:10:49.405 23:52:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:49.665 Malloc1 00:10:49.665 23:52:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:49.665 23:52:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:49.925 /dev/nbd0 00:10:49.925 23:52:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:49.925 23:52:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:49.925 23:52:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:49.925 23:52:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:49.925 23:52:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:49.925 23:52:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:49.925 23:52:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:49.925 23:52:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:49.925 23:52:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:49.925 23:52:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:49.925 23:52:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:10:49.925 1+0 records in 00:10:49.925 1+0 records out 00:10:49.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259998 s, 15.8 MB/s 00:10:49.925 23:52:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:49.925 23:52:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:49.925 23:52:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:49.925 23:52:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:49.925 23:52:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:49.925 23:52:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:49.925 23:52:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:49.925 23:52:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:50.185 /dev/nbd1 00:10:50.185 23:52:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:50.185 23:52:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:50.185 23:52:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:50.185 23:52:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:50.185 23:52:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:50.185 23:52:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:50.185 23:52:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:50.185 23:52:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:50.185 23:52:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:50.185 23:52:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:50.185 23:52:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:50.185 1+0 records in 00:10:50.185 1+0 records out 00:10:50.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245526 s, 16.7 MB/s 00:10:50.185 23:52:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:50.185 23:52:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:50.185 23:52:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:50.185 23:52:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:50.185 23:52:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:50.186 23:52:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:50.186 23:52:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:50.186 23:52:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:50.186 23:52:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.186 23:52:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:10:50.446 { 00:10:50.446 "nbd_device": "/dev/nbd0", 00:10:50.446 "bdev_name": "Malloc0" 00:10:50.446 }, 00:10:50.446 { 00:10:50.446 "nbd_device": "/dev/nbd1", 00:10:50.446 "bdev_name": "Malloc1" 00:10:50.446 } 00:10:50.446 ]' 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:50.446 { 00:10:50.446 "nbd_device": "/dev/nbd0", 00:10:50.446 "bdev_name": "Malloc0" 00:10:50.446 }, 00:10:50.446 { 00:10:50.446 "nbd_device": "/dev/nbd1", 00:10:50.446 "bdev_name": "Malloc1" 00:10:50.446 } 00:10:50.446 ]' 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:50.446 /dev/nbd1' 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:50.446 /dev/nbd1' 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:50.446 256+0 records in 00:10:50.446 256+0 records out 00:10:50.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105609 s, 99.3 MB/s 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:50.446 256+0 records in 00:10:50.446 256+0 records out 00:10:50.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192278 s, 54.5 MB/s 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:50.446 256+0 records in 00:10:50.446 256+0 records out 00:10:50.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203422 s, 51.5 MB/s 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:50.446 23:52:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:50.706 23:52:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:50.706 23:52:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:50.706 23:52:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:50.706 23:52:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:50.706 23:52:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:50.706 23:52:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:50.706 23:52:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:50.706 23:52:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:50.706 23:52:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:50.706 23:52:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:50.966 23:52:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:50.966 23:52:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:50.966 23:52:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:50.966 23:52:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:50.966 23:52:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:50.966 23:52:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:50.966 23:52:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:50.966 23:52:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:50.966 23:52:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:50.966 23:52:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.966 23:52:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:51.226 23:52:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:51.226 23:52:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:51.226 23:52:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:51.226 23:52:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:51.226 23:52:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:51.226 23:52:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:51.226 23:52:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:51.226 23:52:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:51.226 23:52:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:51.226 23:52:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:51.226 23:52:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:51.226 23:52:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:51.226 23:52:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:51.486 23:52:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:51.487 [2024-12-09 23:52:35.849776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:51.487 [2024-12-09 23:52:35.885093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.487 [2024-12-09 23:52:35.885093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.487 [2024-12-09 23:52:35.926663] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:51.487 [2024-12-09 23:52:35.926704] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:54.781 23:52:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:54.781 23:52:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:54.781 spdk_app_start Round 2 00:10:54.781 23:52:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 222297 /var/tmp/spdk-nbd.sock 00:10:54.781 23:52:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 222297 ']' 00:10:54.781 23:52:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:54.781 23:52:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.781 23:52:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:54.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
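
Each round is ended from the outside rather than by stopping the process: the harness sends spdk_kill_instance SIGTERM over RPC, which makes the current spdk_app_start() iteration return inside the still-running app_repeat process (started with -t 4 repeats), sleeps, and then waits for the next round's RPC server. A sketch of that outer loop, with the per-round bdev/NBD work elided and waitforlisten again approximated by an RPC poll:

  for round in 0 1 2; do
      echo "spdk_app_start Round $round"
      until ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods &>/dev/null; do
          sleep 0.1                         # wait for this round's RPC server
      done
      # ... bdev_malloc_create / nbd_start_disk / write-verify / nbd_stop_disk ...
      ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3                               # let the app tear the round down
  done
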
00:10:54.781 23:52:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.781 23:52:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:54.781 23:52:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.781 23:52:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:54.781 23:52:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:54.781 Malloc0 00:10:54.781 23:52:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:55.041 Malloc1 00:10:55.041 23:52:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:55.041 23:52:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:55.301 /dev/nbd0 00:10:55.301 23:52:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:55.301 23:52:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:55.301 23:52:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:55.301 23:52:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:55.301 23:52:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:55.301 23:52:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:55.301 23:52:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:55.301 23:52:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:55.301 23:52:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:55.301 23:52:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:55.301 23:52:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:10:55.301 1+0 records in 00:10:55.301 1+0 records out 00:10:55.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254487 s, 16.1 MB/s 00:10:55.301 23:52:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:55.301 23:52:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:55.301 23:52:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:55.301 23:52:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:55.301 23:52:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:55.301 23:52:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:55.301 23:52:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:55.301 23:52:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:55.301 /dev/nbd1 00:10:55.562 23:52:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:55.562 23:52:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:55.562 23:52:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:55.562 23:52:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:55.562 23:52:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:55.562 23:52:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:55.562 23:52:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:55.562 23:52:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:55.562 23:52:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:55.562 23:52:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:55.562 23:52:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:55.562 1+0 records in 00:10:55.562 1+0 records out 00:10:55.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283158 s, 14.5 MB/s 00:10:55.562 23:52:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:55.562 23:52:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:55.562 23:52:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:55.562 23:52:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:55.562 23:52:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:55.562 23:52:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:55.562 23:52:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:55.562 23:52:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:55.562 23:52:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:55.563 23:52:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:55.563 23:52:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:10:55.563 { 00:10:55.563 "nbd_device": "/dev/nbd0", 00:10:55.563 "bdev_name": "Malloc0" 00:10:55.563 }, 00:10:55.563 { 00:10:55.563 "nbd_device": "/dev/nbd1", 00:10:55.563 "bdev_name": "Malloc1" 00:10:55.563 } 00:10:55.563 ]' 00:10:55.563 23:52:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:55.563 { 00:10:55.563 "nbd_device": "/dev/nbd0", 00:10:55.563 "bdev_name": "Malloc0" 00:10:55.563 }, 00:10:55.563 { 00:10:55.563 "nbd_device": "/dev/nbd1", 00:10:55.563 "bdev_name": "Malloc1" 00:10:55.563 } 00:10:55.563 ]' 00:10:55.563 23:52:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:55.823 /dev/nbd1' 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:55.823 /dev/nbd1' 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:55.823 256+0 records in 00:10:55.823 256+0 records out 00:10:55.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104391 s, 100 MB/s 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:55.823 23:52:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:55.823 256+0 records in 00:10:55.823 256+0 records out 00:10:55.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196066 s, 53.5 MB/s 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:55.824 256+0 records in 00:10:55.824 256+0 records out 00:10:55.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204658 s, 51.2 MB/s 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.824 23:52:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:56.084 23:52:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:56.084 23:52:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:56.084 23:52:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:56.084 23:52:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.084 23:52:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.084 23:52:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:56.084 23:52:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:56.084 23:52:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.084 23:52:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:56.084 23:52:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:56.344 23:52:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:56.344 23:52:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:56.344 23:52:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:56.344 23:52:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.344 23:52:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.344 23:52:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:56.344 23:52:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:56.344 23:52:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.344 23:52:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:56.344 23:52:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.344 23:52:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:56.344 23:52:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:56.344 23:52:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:56.344 23:52:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:56.604 23:52:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:56.604 23:52:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:56.604 23:52:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:56.604 23:52:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:56.604 23:52:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:56.604 23:52:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:56.604 23:52:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:56.604 23:52:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:56.604 23:52:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:56.604 23:52:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:56.604 23:52:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:56.866 [2024-12-09 23:52:41.195100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:56.866 [2024-12-09 23:52:41.230446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.866 [2024-12-09 23:52:41.230445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.866 [2024-12-09 23:52:41.271062] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:56.866 [2024-12-09 23:52:41.271103] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:00.162 23:52:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 222297 /var/tmp/spdk-nbd.sock 00:11:00.162 23:52:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 222297 ']' 00:11:00.162 23:52:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:00.162 23:52:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.162 23:52:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:00.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:11:00.162 23:52:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.162 23:52:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:00.162 23:52:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.162 23:52:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:00.162 23:52:44 event.app_repeat -- event/event.sh@39 -- # killprocess 222297 00:11:00.162 23:52:44 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 222297 ']' 00:11:00.162 23:52:44 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 222297 00:11:00.162 23:52:44 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:11:00.162 23:52:44 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.162 23:52:44 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 222297 00:11:00.163 23:52:44 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.163 23:52:44 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.163 23:52:44 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 222297' 00:11:00.163 killing process with pid 222297 00:11:00.163 23:52:44 event.app_repeat -- common/autotest_common.sh@973 -- # kill 222297 00:11:00.163 23:52:44 event.app_repeat -- common/autotest_common.sh@978 -- # wait 222297 00:11:00.163 spdk_app_start is called in Round 0. 00:11:00.163 Shutdown signal received, stop current app iteration 00:11:00.163 Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 reinitialization... 00:11:00.163 spdk_app_start is called in Round 1. 00:11:00.163 Shutdown signal received, stop current app iteration 00:11:00.163 Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 reinitialization... 00:11:00.163 spdk_app_start is called in Round 2. 00:11:00.163 Shutdown signal received, stop current app iteration 00:11:00.163 Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 reinitialization... 00:11:00.163 spdk_app_start is called in Round 3. 
00:11:00.163 Shutdown signal received, stop current app iteration 00:11:00.163 23:52:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:00.163 23:52:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:00.163 00:11:00.163 real 0m16.549s 00:11:00.163 user 0m35.881s 00:11:00.163 sys 0m3.074s 00:11:00.163 23:52:44 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.163 23:52:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:00.163 ************************************ 00:11:00.163 END TEST app_repeat 00:11:00.163 ************************************ 00:11:00.163 23:52:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:00.163 23:52:44 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:11:00.163 23:52:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:00.163 23:52:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.163 23:52:44 event -- common/autotest_common.sh@10 -- # set +x 00:11:00.163 ************************************ 00:11:00.163 START TEST cpu_locks 00:11:00.163 ************************************ 00:11:00.163 23:52:44 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:11:00.423 * Looking for test storage... 00:11:00.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:11:00.423 23:52:44 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:00.423 23:52:44 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:11:00.423 23:52:44 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:00.423 23:52:44 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.423 23:52:44 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:00.423 23:52:44 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.423 23:52:44 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:00.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.423 --rc genhtml_branch_coverage=1 00:11:00.423 --rc genhtml_function_coverage=1 00:11:00.423 --rc genhtml_legend=1 00:11:00.423 --rc geninfo_all_blocks=1 00:11:00.423 --rc geninfo_unexecuted_blocks=1 00:11:00.423 00:11:00.423 ' 00:11:00.423 23:52:44 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:00.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.423 --rc genhtml_branch_coverage=1 00:11:00.423 --rc genhtml_function_coverage=1 00:11:00.423 --rc genhtml_legend=1 00:11:00.423 --rc geninfo_all_blocks=1 00:11:00.423 --rc geninfo_unexecuted_blocks=1 00:11:00.423 00:11:00.423 ' 00:11:00.423 23:52:44 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:00.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.423 --rc genhtml_branch_coverage=1 00:11:00.423 --rc genhtml_function_coverage=1 00:11:00.423 --rc genhtml_legend=1 00:11:00.423 --rc geninfo_all_blocks=1 00:11:00.423 --rc geninfo_unexecuted_blocks=1 00:11:00.423 00:11:00.423 ' 00:11:00.423 23:52:44 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:00.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.424 --rc genhtml_branch_coverage=1 00:11:00.424 --rc genhtml_function_coverage=1 00:11:00.424 --rc genhtml_legend=1 00:11:00.424 --rc geninfo_all_blocks=1 00:11:00.424 --rc geninfo_unexecuted_blocks=1 00:11:00.424 00:11:00.424 ' 00:11:00.424 23:52:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:00.424 23:52:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:00.424 23:52:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:00.424 23:52:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:00.424 23:52:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:00.424 23:52:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.424 23:52:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:00.424 ************************************ 
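Before the cpu_locks cases start, the xtrace above steps through the lcov version gate in scripts/common.sh. A minimal standalone sketch of that element-wise comparison, assuming the same IFS=.-: splitting (the real cmp_versions also handles '>', '==' and validates each field as a decimal):

    # return 0 when version $1 sorts before version $2, compared element by element
    version_lt() {
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    # lcov 1.15 predates 2.x, so the legacy --rc lcov_*_coverage=1 options are selected
    version_lt 1.15 2 && echo "using legacy lcov options"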
00:11:00.424 START TEST default_locks 00:11:00.424 ************************************ 00:11:00.424 23:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:11:00.424 23:52:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=225417 00:11:00.424 23:52:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:00.424 23:52:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 225417 00:11:00.424 23:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 225417 ']' 00:11:00.424 23:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.424 23:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.424 23:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.424 23:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.424 23:52:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:00.424 [2024-12-09 23:52:44.832626] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:11:00.424 [2024-12-09 23:52:44.832672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid225417 ] 00:11:00.684 [2024-12-09 23:52:44.919479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.684 [2024-12-09 23:52:44.959623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.251 23:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.251 23:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:11:01.251 23:52:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 225417 00:11:01.251 23:52:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 225417 00:11:01.251 23:52:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:01.510 lslocks: write error 00:11:01.510 23:52:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 225417 00:11:01.510 23:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 225417 ']' 00:11:01.510 23:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 225417 00:11:01.510 23:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:11:01.510 23:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.510 23:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 225417 00:11:01.510 23:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.510 23:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.511 23:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 225417' 
00:11:01.511 killing process with pid 225417 00:11:01.511 23:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 225417 00:11:01.511 23:52:45 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 225417 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 225417 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 225417 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 225417 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 225417 ']' 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
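The default_locks case above boils down to asserting that a freshly started single-core target holds a per-core lock, and that the lock disappears with the process (the stray "lslocks: write error" is just lslocks hitting a closed pipe once grep -q matches and exits). A condensed sketch:

    ./build/bin/spdk_tgt -m 0x1 &
    pid=$!
    # the real test waits for /var/tmp/spdk.sock via waitforlisten before probing
    lslocks -p "$pid" | grep -q spdk_cpu_lock      # locks_exist: the core 0 lock is held
    kill "$pid"
    # once the process is gone, the same probe (and the spdk_cpu_lock_* glob) must come up empty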
00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:02.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (225417) - No such process 00:11:02.080 ERROR: process (pid: 225417) is no longer running 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:02.080 00:11:02.080 real 0m1.505s 00:11:02.080 user 0m1.591s 00:11:02.080 sys 0m0.524s 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.080 23:52:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:02.080 ************************************ 00:11:02.080 END TEST default_locks 00:11:02.080 ************************************ 00:11:02.080 23:52:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:02.080 23:52:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:02.080 23:52:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.080 23:52:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:02.080 ************************************ 00:11:02.080 START TEST default_locks_via_rpc 00:11:02.080 ************************************ 00:11:02.080 23:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:11:02.080 23:52:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=225731 00:11:02.080 23:52:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 225731 00:11:02.080 23:52:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:02.080 23:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 225731 ']' 00:11:02.080 23:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.080 23:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.080 23:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:02.080 23:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.080 23:52:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.080 [2024-12-09 23:52:46.426483] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:11:02.080 [2024-12-09 23:52:46.426529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid225731 ] 00:11:02.080 [2024-12-09 23:52:46.516792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.340 [2024-12-09 23:52:46.558065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 225731 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 225731 00:11:02.910 23:52:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:03.170 23:52:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 225731 00:11:03.170 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 225731 ']' 00:11:03.170 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 225731 00:11:03.170 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:11:03.170 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.170 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 225731 00:11:03.170 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.170 23:52:47 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.170 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 225731' 00:11:03.170 killing process with pid 225731 00:11:03.170 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 225731 00:11:03.170 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 225731 00:11:03.740 00:11:03.740 real 0m1.545s 00:11:03.740 user 0m1.621s 00:11:03.740 sys 0m0.548s 00:11:03.740 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.740 23:52:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.740 ************************************ 00:11:03.740 END TEST default_locks_via_rpc 00:11:03.740 ************************************ 00:11:03.740 23:52:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:03.740 23:52:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:03.740 23:52:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.740 23:52:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:03.740 ************************************ 00:11:03.740 START TEST non_locking_app_on_locked_coremask 00:11:03.740 ************************************ 00:11:03.740 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:11:03.740 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=226070 00:11:03.740 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 226070 /var/tmp/spdk.sock 00:11:03.740 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:03.740 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 226070 ']' 00:11:03.740 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.740 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.740 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.740 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.740 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:03.740 [2024-12-09 23:52:48.057341] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
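default_locks_via_rpc, which finished just above, toggles the same per-core locks at runtime instead of at startup. Under the same assumptions (default /var/tmp/spdk.sock, target started with -m 0x1), the sequence is roughly:

    ./scripts/rpc.py framework_disable_cpumask_locks   # target releases its core lock(s)
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "no core lock held, as expected"
    ./scripts/rpc.py framework_enable_cpumask_locks    # target re-claims the lock for core 0
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"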
00:11:03.740 [2024-12-09 23:52:48.057387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid226070 ] 00:11:03.740 [2024-12-09 23:52:48.146510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.740 [2024-12-09 23:52:48.187092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.679 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.679 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:04.679 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:04.679 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=226207 00:11:04.679 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 226207 /var/tmp/spdk2.sock 00:11:04.679 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 226207 ']' 00:11:04.679 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:04.679 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.679 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:04.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:04.679 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.679 23:52:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:04.679 [2024-12-09 23:52:48.922053] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:11:04.679 [2024-12-09 23:52:48.922104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid226207 ] 00:11:04.680 [2024-12-09 23:52:49.031635] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:04.680 [2024-12-09 23:52:49.031663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.680 [2024-12-09 23:52:49.113498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.619 23:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.619 23:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:05.619 23:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 226070 00:11:05.619 23:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 226070 00:11:05.619 23:52:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:06.189 lslocks: write error 00:11:06.189 23:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 226070 00:11:06.189 23:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 226070 ']' 00:11:06.189 23:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 226070 00:11:06.189 23:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:06.189 23:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.189 23:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 226070 00:11:06.189 23:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.189 23:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.189 23:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 226070' 00:11:06.189 killing process with pid 226070 00:11:06.190 23:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 226070 00:11:06.190 23:52:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 226070 00:11:06.759 23:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 226207 00:11:06.759 23:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 226207 ']' 00:11:06.759 23:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 226207 00:11:06.759 23:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:06.759 23:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.759 23:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 226207 00:11:06.759 23:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.019 23:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.019 23:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 226207' 00:11:07.019 killing 
process with pid 226207 00:11:07.019 23:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 226207 00:11:07.019 23:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 226207 00:11:07.280 00:11:07.280 real 0m3.531s 00:11:07.280 user 0m3.818s 00:11:07.280 sys 0m1.102s 00:11:07.280 23:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.280 23:52:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:07.280 ************************************ 00:11:07.280 END TEST non_locking_app_on_locked_coremask 00:11:07.280 ************************************ 00:11:07.280 23:52:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:07.280 23:52:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:07.280 23:52:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.280 23:52:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:07.280 ************************************ 00:11:07.280 START TEST locking_app_on_unlocked_coremask 00:11:07.280 ************************************ 00:11:07.280 23:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:11:07.280 23:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=226759 00:11:07.280 23:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 226759 /var/tmp/spdk.sock 00:11:07.280 23:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:07.280 23:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 226759 ']' 00:11:07.280 23:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.280 23:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.280 23:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.280 23:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.280 23:52:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:07.280 [2024-12-09 23:52:51.672323] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:11:07.280 [2024-12-09 23:52:51.672369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid226759 ] 00:11:07.540 [2024-12-09 23:52:51.766728] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
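non_locking_app_on_locked_coremask, completed above, demonstrates the supported way to co-schedule a second target on an already-locked core: the second instance opts out of core locking and talks on its own RPC socket. In outline, with the socket path taken from the trace:

    ./build/bin/spdk_tgt -m 0x1 &                                            # claims the core 0 lock
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # the second instance logs "CPU core locks deactivated." and starts fine on the same core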
00:11:07.540 [2024-12-09 23:52:51.766761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.540 [2024-12-09 23:52:51.803262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.110 23:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.110 23:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:08.110 23:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=226850 00:11:08.110 23:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 226850 /var/tmp/spdk2.sock 00:11:08.110 23:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:08.110 23:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 226850 ']' 00:11:08.110 23:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:08.110 23:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.110 23:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:08.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:08.110 23:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.110 23:52:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:08.110 [2024-12-09 23:52:52.568880] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:11:08.110 [2024-12-09 23:52:52.568933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid226850 ] 00:11:08.370 [2024-12-09 23:52:52.679616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.370 [2024-12-09 23:52:52.759055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.310 23:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.310 23:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:09.310 23:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 226850 00:11:09.310 23:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 226850 00:11:09.310 23:52:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:09.880 lslocks: write error 00:11:09.880 23:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 226759 00:11:09.880 23:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 226759 ']' 00:11:09.880 23:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 226759 00:11:09.880 23:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:09.880 23:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.880 23:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 226759 00:11:10.140 23:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.140 23:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.140 23:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 226759' 00:11:10.140 killing process with pid 226759 00:11:10.140 23:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 226759 00:11:10.140 23:52:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 226759 00:11:10.710 23:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 226850 00:11:10.710 23:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 226850 ']' 00:11:10.710 23:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 226850 00:11:10.710 23:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:10.710 23:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.710 23:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 226850 00:11:10.710 23:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.710 23:52:55 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.710 23:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 226850' 00:11:10.710 killing process with pid 226850 00:11:10.710 23:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 226850 00:11:10.710 23:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 226850 00:11:10.970 00:11:10.970 real 0m3.752s 00:11:10.970 user 0m4.113s 00:11:10.970 sys 0m1.210s 00:11:10.970 23:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.970 23:52:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:10.970 ************************************ 00:11:10.970 END TEST locking_app_on_unlocked_coremask 00:11:10.970 ************************************ 00:11:10.970 23:52:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:10.970 23:52:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.970 23:52:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.970 23:52:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:11.229 ************************************ 00:11:11.229 START TEST locking_app_on_locked_coremask 00:11:11.229 ************************************ 00:11:11.229 23:52:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:11:11.229 23:52:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=227355 00:11:11.229 23:52:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 227355 /var/tmp/spdk.sock 00:11:11.229 23:52:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:11.229 23:52:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 227355 ']' 00:11:11.229 23:52:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.229 23:52:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.229 23:52:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.229 23:52:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.229 23:52:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:11.229 [2024-12-09 23:52:55.511203] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:11:11.229 [2024-12-09 23:52:55.511253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227355 ] 00:11:11.229 [2024-12-09 23:52:55.601111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.229 [2024-12-09 23:52:55.642376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=227600 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 227600 /var/tmp/spdk2.sock 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 227600 /var/tmp/spdk2.sock 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 227600 /var/tmp/spdk2.sock 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 227600 ']' 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:12.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.170 23:52:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:12.170 [2024-12-09 23:52:56.382648] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:11:12.170 [2024-12-09 23:52:56.382699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227600 ] 00:11:12.170 [2024-12-09 23:52:56.491933] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 227355 has claimed it. 00:11:12.170 [2024-12-09 23:52:56.491977] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:12.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (227600) - No such process 00:11:12.740 ERROR: process (pid: 227600) is no longer running 00:11:12.740 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.740 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:12.740 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:12.740 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:12.741 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:12.741 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:12.741 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 227355 00:11:12.741 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 227355 00:11:12.741 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:13.311 lslocks: write error 00:11:13.311 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 227355 00:11:13.311 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 227355 ']' 00:11:13.311 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 227355 00:11:13.311 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:13.311 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.311 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227355 00:11:13.311 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.311 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.311 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227355' 00:11:13.311 killing process with pid 227355 00:11:13.311 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 227355 00:11:13.311 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 227355 00:11:13.572 00:11:13.572 real 0m2.485s 00:11:13.572 user 0m2.717s 00:11:13.572 sys 0m0.785s 00:11:13.572 23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.572 
23:52:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:13.572 ************************************ 00:11:13.572 END TEST locking_app_on_locked_coremask 00:11:13.572 ************************************ 00:11:13.572 23:52:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:13.572 23:52:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:13.572 23:52:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.572 23:52:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:13.572 ************************************ 00:11:13.572 START TEST locking_overlapped_coremask 00:11:13.572 ************************************ 00:11:13.572 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:11:13.572 23:52:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=227904 00:11:13.572 23:52:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:11:13.572 23:52:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 227904 /var/tmp/spdk.sock 00:11:13.572 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 227904 ']' 00:11:13.572 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.572 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.572 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.572 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.572 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:13.832 [2024-12-09 23:52:58.079750] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
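locking_app_on_locked_coremask, which ended just above, covers the opposite, failing path: a second target on the same mask without --disable-cpumask-locks must refuse to start, and the test wraps its waitforlisten in NOT so the expected failure counts as a pass. Compressed:

    ./build/bin/spdk_tgt -m 0x1 &                          # pid A claims core 0
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &   # pid B attempts the same core
    # pid B is expected to log
    #   "Cannot create lock on core 0, probably process <A> has claimed it"
    # and exit, so waiting for /var/tmp/spdk2.sock fails and NOT inverts that into success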
00:11:13.832 [2024-12-09 23:52:58.079796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227904 ] 00:11:13.832 [2024-12-09 23:52:58.170197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:13.832 [2024-12-09 23:52:58.214440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.832 [2024-12-09 23:52:58.214478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.832 [2024-12-09 23:52:58.214479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=228095 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 228095 /var/tmp/spdk2.sock 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 228095 /var/tmp/spdk2.sock 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 228095 /var/tmp/spdk2.sock 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 228095 ']' 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:14.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.770 23:52:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:14.770 [2024-12-09 23:52:58.955188] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:11:14.770 [2024-12-09 23:52:58.955241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228095 ] 00:11:14.770 [2024-12-09 23:52:59.068700] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 227904 has claimed it. 00:11:14.770 [2024-12-09 23:52:59.068738] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:15.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (228095) - No such process 00:11:15.340 ERROR: process (pid: 228095) is no longer running 00:11:15.340 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.340 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:15.340 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:15.340 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:15.340 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:15.340 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:15.340 23:52:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:15.340 23:52:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:15.340 23:52:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:15.340 23:52:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:15.340 23:52:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 227904 00:11:15.340 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 227904 ']' 00:11:15.340 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 227904 00:11:15.340 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:11:15.341 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.341 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227904 00:11:15.341 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.341 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.341 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227904' 00:11:15.341 killing process with pid 227904 00:11:15.341 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 227904 00:11:15.341 23:52:59 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 227904 00:11:15.600 00:11:15.600 real 0m1.951s 00:11:15.600 user 0m5.576s 00:11:15.600 sys 0m0.482s 00:11:15.600 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.600 23:52:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:15.600 ************************************ 00:11:15.600 END TEST locking_overlapped_coremask 00:11:15.600 ************************************ 00:11:15.600 23:53:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:15.600 23:53:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:15.600 23:53:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.600 23:53:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:15.600 ************************************ 00:11:15.600 START TEST locking_overlapped_coremask_via_rpc 00:11:15.600 ************************************ 00:11:15.600 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:11:15.600 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=228212 00:11:15.600 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 228212 /var/tmp/spdk.sock 00:11:15.600 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:15.600 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 228212 ']' 00:11:15.600 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.600 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.600 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.600 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.600 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.858 [2024-12-09 23:53:00.115544] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:11:15.858 [2024-12-09 23:53:00.115597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228212 ] 00:11:15.858 [2024-12-09 23:53:00.208380] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
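The locking_overlapped_coremask_via_rpc variant started above differs in one detail: both targets are launched with --disable-cpumask-locks, so neither claims its cores at startup, and the locks are taken later over JSON-RPC. A hedged sketch of the calls the script drives further below (rpc_cmd in the trace ultimately issues these methods; shown here as plain scripts/rpc.py invocations):

  # first target (pid 228212, -m 0x7, /var/tmp/spdk.sock) enables its own locks -> succeeds, takes cores 0-2
  scripts/rpc.py framework_enable_cpumask_locks
  # second target (-m 0x1c, /var/tmp/spdk2.sock) tries the same -> expected to fail on core 2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks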
00:11:15.858 [2024-12-09 23:53:00.208410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:15.858 [2024-12-09 23:53:00.252189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.858 [2024-12-09 23:53:00.252296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.858 [2024-12-09 23:53:00.252297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.794 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.794 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:16.794 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:16.794 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=228468 00:11:16.795 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 228468 /var/tmp/spdk2.sock 00:11:16.795 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 228468 ']' 00:11:16.795 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:16.795 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.795 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:16.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:16.795 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.795 23:53:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.795 [2024-12-09 23:53:00.982393] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:11:16.795 [2024-12-09 23:53:00.982440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228468 ] 00:11:16.795 [2024-12-09 23:53:01.092997] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:16.795 [2024-12-09 23:53:01.093027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:16.795 [2024-12-09 23:53:01.178944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.795 [2024-12-09 23:53:01.179062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.795 [2024-12-09 23:53:01.179063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:17.362 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.362 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:17.362 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:17.362 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.362 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.362 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.362 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:17.362 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:17.362 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:17.362 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:17.362 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.362 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:17.621 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.621 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:17.621 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.621 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.621 [2024-12-09 23:53:01.840917] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 228212 has claimed it. 
00:11:17.621 request: 00:11:17.621 { 00:11:17.621 "method": "framework_enable_cpumask_locks", 00:11:17.621 "req_id": 1 00:11:17.621 } 00:11:17.621 Got JSON-RPC error response 00:11:17.621 response: 00:11:17.621 { 00:11:17.621 "code": -32603, 00:11:17.621 "message": "Failed to claim CPU core: 2" 00:11:17.621 } 00:11:17.621 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:17.621 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:17.621 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:17.621 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:17.621 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:17.621 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 228212 /var/tmp/spdk.sock 00:11:17.621 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 228212 ']' 00:11:17.621 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.622 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.622 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.622 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.622 23:53:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.622 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.622 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:17.622 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 228468 /var/tmp/spdk2.sock 00:11:17.622 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 228468 ']' 00:11:17.622 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:17.622 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.622 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:17.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
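The -32603 "Failed to claim CPU core: 2" response above is the expected outcome rather than a failure: the call is wrapped in the NOT helper, which inverts the wrapped command's exit status (the es=1 bookkeeping in the trace). A hedged reading:

  # returns 0 (so the test continues) only if the wrapped RPC fails, which it does here
  NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks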
00:11:17.622 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.622 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.880 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.880 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:17.880 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:17.880 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:17.880 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:17.880 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:17.880 00:11:17.880 real 0m2.208s 00:11:17.880 user 0m0.919s 00:11:17.880 sys 0m0.223s 00:11:17.880 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.880 23:53:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.880 ************************************ 00:11:17.880 END TEST locking_overlapped_coremask_via_rpc 00:11:17.880 ************************************ 00:11:17.880 23:53:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:17.880 23:53:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 228212 ]] 00:11:17.880 23:53:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 228212 00:11:17.880 23:53:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 228212 ']' 00:11:17.880 23:53:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 228212 00:11:17.880 23:53:02 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:17.880 23:53:02 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.880 23:53:02 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 228212 00:11:18.139 23:53:02 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:18.139 23:53:02 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:18.139 23:53:02 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 228212' 00:11:18.139 killing process with pid 228212 00:11:18.139 23:53:02 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 228212 00:11:18.139 23:53:02 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 228212 00:11:18.399 23:53:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 228468 ]] 00:11:18.399 23:53:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 228468 00:11:18.399 23:53:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 228468 ']' 00:11:18.399 23:53:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 228468 00:11:18.399 23:53:02 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:18.399 23:53:02 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
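check_remaining_locks, expanded in the trace above, asserts that only the first target's cores remain locked after the failed second claim. A minimal sketch of the comparison, using the same names as the trace:

  locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files that actually exist
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2, i.e. the 0x7 mask of pid 228212
  [[ ${locks[*]} == "${locks_expected[*]}" ]]          # passes only when the two lists match exactly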
00:11:18.399 23:53:02 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 228468 00:11:18.399 23:53:02 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:18.399 23:53:02 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:18.399 23:53:02 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 228468' 00:11:18.399 killing process with pid 228468 00:11:18.399 23:53:02 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 228468 00:11:18.399 23:53:02 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 228468 00:11:18.659 23:53:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:18.659 23:53:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:18.659 23:53:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 228212 ]] 00:11:18.659 23:53:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 228212 00:11:18.659 23:53:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 228212 ']' 00:11:18.659 23:53:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 228212 00:11:18.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (228212) - No such process 00:11:18.659 23:53:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 228212 is not found' 00:11:18.659 Process with pid 228212 is not found 00:11:18.659 23:53:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 228468 ]] 00:11:18.659 23:53:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 228468 00:11:18.659 23:53:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 228468 ']' 00:11:18.659 23:53:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 228468 00:11:18.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (228468) - No such process 00:11:18.659 23:53:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 228468 is not found' 00:11:18.659 Process with pid 228468 is not found 00:11:18.659 23:53:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:18.659 00:11:18.659 real 0m18.525s 00:11:18.659 user 0m31.673s 00:11:18.659 sys 0m5.994s 00:11:18.659 23:53:03 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.659 23:53:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:18.659 ************************************ 00:11:18.659 END TEST cpu_locks 00:11:18.659 ************************************ 00:11:18.659 00:11:18.659 real 0m44.260s 00:11:18.659 user 1m23.751s 00:11:18.659 sys 0m10.279s 00:11:18.659 23:53:03 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.659 23:53:03 event -- common/autotest_common.sh@10 -- # set +x 00:11:18.659 ************************************ 00:11:18.659 END TEST event 00:11:18.659 ************************************ 00:11:18.919 23:53:03 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:11:18.919 23:53:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:18.919 23:53:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.919 23:53:03 -- common/autotest_common.sh@10 -- # set +x 00:11:18.919 ************************************ 00:11:18.919 START TEST thread 00:11:18.919 ************************************ 00:11:18.919 23:53:03 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:11:18.919 * Looking for test storage... 00:11:18.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:11:18.919 23:53:03 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:18.919 23:53:03 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:11:18.919 23:53:03 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:18.919 23:53:03 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:18.919 23:53:03 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.919 23:53:03 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.919 23:53:03 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.919 23:53:03 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.919 23:53:03 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.919 23:53:03 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.919 23:53:03 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.919 23:53:03 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.919 23:53:03 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.919 23:53:03 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.919 23:53:03 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.919 23:53:03 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:18.919 23:53:03 thread -- scripts/common.sh@345 -- # : 1 00:11:18.919 23:53:03 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.919 23:53:03 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:18.919 23:53:03 thread -- scripts/common.sh@365 -- # decimal 1 00:11:18.919 23:53:03 thread -- scripts/common.sh@353 -- # local d=1 00:11:18.919 23:53:03 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.919 23:53:03 thread -- scripts/common.sh@355 -- # echo 1 00:11:18.919 23:53:03 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.919 23:53:03 thread -- scripts/common.sh@366 -- # decimal 2 00:11:19.179 23:53:03 thread -- scripts/common.sh@353 -- # local d=2 00:11:19.179 23:53:03 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.179 23:53:03 thread -- scripts/common.sh@355 -- # echo 2 00:11:19.179 23:53:03 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.179 23:53:03 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.179 23:53:03 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.179 23:53:03 thread -- scripts/common.sh@368 -- # return 0 00:11:19.179 23:53:03 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.179 23:53:03 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:19.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.179 --rc genhtml_branch_coverage=1 00:11:19.179 --rc genhtml_function_coverage=1 00:11:19.179 --rc genhtml_legend=1 00:11:19.179 --rc geninfo_all_blocks=1 00:11:19.179 --rc geninfo_unexecuted_blocks=1 00:11:19.179 00:11:19.179 ' 00:11:19.179 23:53:03 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:19.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.179 --rc genhtml_branch_coverage=1 00:11:19.179 --rc genhtml_function_coverage=1 00:11:19.179 --rc genhtml_legend=1 00:11:19.179 --rc geninfo_all_blocks=1 00:11:19.179 --rc geninfo_unexecuted_blocks=1 00:11:19.179 00:11:19.179 ' 00:11:19.179 23:53:03 thread 
-- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:19.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.179 --rc genhtml_branch_coverage=1 00:11:19.179 --rc genhtml_function_coverage=1 00:11:19.179 --rc genhtml_legend=1 00:11:19.179 --rc geninfo_all_blocks=1 00:11:19.179 --rc geninfo_unexecuted_blocks=1 00:11:19.179 00:11:19.179 ' 00:11:19.179 23:53:03 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:19.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.179 --rc genhtml_branch_coverage=1 00:11:19.179 --rc genhtml_function_coverage=1 00:11:19.179 --rc genhtml_legend=1 00:11:19.179 --rc geninfo_all_blocks=1 00:11:19.179 --rc geninfo_unexecuted_blocks=1 00:11:19.179 00:11:19.179 ' 00:11:19.179 23:53:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:19.179 23:53:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:19.179 23:53:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.179 23:53:03 thread -- common/autotest_common.sh@10 -- # set +x 00:11:19.179 ************************************ 00:11:19.179 START TEST thread_poller_perf 00:11:19.179 ************************************ 00:11:19.179 23:53:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:19.179 [2024-12-09 23:53:03.464041] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:11:19.179 [2024-12-09 23:53:03.464104] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228990 ] 00:11:19.179 [2024-12-09 23:53:03.555187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.179 [2024-12-09 23:53:03.593506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.179 Running 1000 pollers for 1 seconds with 1 microseconds period. 
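The poller_perf flags can be read off the banner just printed, though the mapping here is inferred from that banner rather than from the tool's help text:

  # -b 1000 -> register 1000 pollers
  # -l 1    -> 1 microsecond poller period (the second run further below uses -l 0)
  # -t 1    -> run for 1 second
  poller_perf -b 1000 -l 1 -t 1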
00:11:20.560 [2024-12-09T22:53:05.033Z] ====================================== 00:11:20.560 [2024-12-09T22:53:05.033Z] busy:2509793612 (cyc) 00:11:20.560 [2024-12-09T22:53:05.033Z] total_run_count: 434000 00:11:20.560 [2024-12-09T22:53:05.033Z] tsc_hz: 2500000000 (cyc) 00:11:20.560 [2024-12-09T22:53:05.033Z] ====================================== 00:11:20.560 [2024-12-09T22:53:05.033Z] poller_cost: 5782 (cyc), 2312 (nsec) 00:11:20.560 00:11:20.560 real 0m1.194s 00:11:20.560 user 0m1.105s 00:11:20.560 sys 0m0.085s 00:11:20.560 23:53:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.560 23:53:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:20.560 ************************************ 00:11:20.560 END TEST thread_poller_perf 00:11:20.560 ************************************ 00:11:20.560 23:53:04 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:20.560 23:53:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:20.560 23:53:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.560 23:53:04 thread -- common/autotest_common.sh@10 -- # set +x 00:11:20.560 ************************************ 00:11:20.560 START TEST thread_poller_perf 00:11:20.560 ************************************ 00:11:20.560 23:53:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:20.560 [2024-12-09 23:53:04.745231] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:11:20.560 [2024-12-09 23:53:04.745315] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229146 ] 00:11:20.560 [2024-12-09 23:53:04.839599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.560 [2024-12-09 23:53:04.879651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.560 Running 1000 pollers for 1 seconds with 0 microseconds period. 
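The poller_cost figures in the results block above follow directly from the other counters; a quick arithmetic check, not part of the captured output:

  # cost per poller iteration (cycles) = busy cycles / total_run_count
  #   2509793612 / 434000 ≈ 5782 cyc
  # converted to nanoseconds at tsc_hz = 2.5 GHz:
  #   5782 / 2.5 ≈ 2312 nsec
  # the 0-microsecond run below works out the same way: 2501904802 / 5102000 ≈ 490 cyc ≈ 196 nsec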
00:11:21.504 [2024-12-09T22:53:05.977Z] ====================================== 00:11:21.504 [2024-12-09T22:53:05.977Z] busy:2501904802 (cyc) 00:11:21.504 [2024-12-09T22:53:05.977Z] total_run_count: 5102000 00:11:21.504 [2024-12-09T22:53:05.977Z] tsc_hz: 2500000000 (cyc) 00:11:21.504 [2024-12-09T22:53:05.977Z] ====================================== 00:11:21.504 [2024-12-09T22:53:05.977Z] poller_cost: 490 (cyc), 196 (nsec) 00:11:21.504 00:11:21.504 real 0m1.195s 00:11:21.504 user 0m1.098s 00:11:21.504 sys 0m0.092s 00:11:21.504 23:53:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.504 23:53:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:21.504 ************************************ 00:11:21.504 END TEST thread_poller_perf 00:11:21.504 ************************************ 00:11:21.504 23:53:05 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:21.504 00:11:21.504 real 0m2.758s 00:11:21.504 user 0m2.381s 00:11:21.504 sys 0m0.399s 00:11:21.504 23:53:05 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.504 23:53:05 thread -- common/autotest_common.sh@10 -- # set +x 00:11:21.504 ************************************ 00:11:21.504 END TEST thread 00:11:21.504 ************************************ 00:11:21.764 23:53:05 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:21.764 23:53:05 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:11:21.764 23:53:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:21.764 23:53:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.764 23:53:05 -- common/autotest_common.sh@10 -- # set +x 00:11:21.764 ************************************ 00:11:21.764 START TEST app_cmdline 00:11:21.764 ************************************ 00:11:21.764 23:53:06 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:11:21.764 * Looking for test storage... 
00:11:21.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:21.764 23:53:06 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:21.764 23:53:06 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:11:21.764 23:53:06 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:21.764 23:53:06 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.764 23:53:06 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:21.764 23:53:06 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.764 23:53:06 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:21.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.764 --rc genhtml_branch_coverage=1 00:11:21.764 --rc genhtml_function_coverage=1 00:11:21.764 --rc genhtml_legend=1 00:11:21.764 --rc geninfo_all_blocks=1 00:11:21.764 --rc geninfo_unexecuted_blocks=1 00:11:21.764 00:11:21.764 ' 00:11:21.764 23:53:06 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:21.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.764 --rc genhtml_branch_coverage=1 00:11:21.764 --rc genhtml_function_coverage=1 00:11:21.764 --rc genhtml_legend=1 00:11:21.764 --rc geninfo_all_blocks=1 00:11:21.764 --rc geninfo_unexecuted_blocks=1 
00:11:21.764 00:11:21.764 ' 00:11:21.764 23:53:06 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:21.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.764 --rc genhtml_branch_coverage=1 00:11:21.764 --rc genhtml_function_coverage=1 00:11:21.764 --rc genhtml_legend=1 00:11:21.764 --rc geninfo_all_blocks=1 00:11:21.764 --rc geninfo_unexecuted_blocks=1 00:11:21.764 00:11:21.764 ' 00:11:21.764 23:53:06 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:21.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.764 --rc genhtml_branch_coverage=1 00:11:21.764 --rc genhtml_function_coverage=1 00:11:21.764 --rc genhtml_legend=1 00:11:21.764 --rc geninfo_all_blocks=1 00:11:21.764 --rc geninfo_unexecuted_blocks=1 00:11:21.764 00:11:21.764 ' 00:11:21.764 23:53:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:21.764 23:53:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=229477 00:11:21.764 23:53:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 229477 00:11:21.764 23:53:06 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:21.764 23:53:06 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 229477 ']' 00:11:21.764 23:53:06 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.025 23:53:06 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.025 23:53:06 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.025 23:53:06 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.025 23:53:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:22.025 [2024-12-09 23:53:06.292385] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
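The cmdline test starts this target with a deliberately narrow RPC allow-list (--rpcs-allowed spdk_get_version,rpc_get_methods). The calls that follow in the trace probe both sides of that list; a hedged sketch of the same probes as plain rpc.py invocations:

  scripts/rpc.py spdk_get_version         # on the allow-list -> returns the version object shown below
  scripts/rpc.py rpc_get_methods          # on the allow-list -> lists exactly the two permitted methods
  scripts/rpc.py env_dpdk_get_mem_stats   # not on the allow-list -> rejected with -32601 "Method not found"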
00:11:22.025 [2024-12-09 23:53:06.292437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229477 ] 00:11:22.025 [2024-12-09 23:53:06.381046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.025 [2024-12-09 23:53:06.424292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:11:22.966 23:53:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:11:22.966 { 00:11:22.966 "version": "SPDK v25.01-pre git sha1 969b360d9", 00:11:22.966 "fields": { 00:11:22.966 "major": 25, 00:11:22.966 "minor": 1, 00:11:22.966 "patch": 0, 00:11:22.966 "suffix": "-pre", 00:11:22.966 "commit": "969b360d9" 00:11:22.966 } 00:11:22.966 } 00:11:22.966 23:53:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:22.966 23:53:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:22.966 23:53:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:22.966 23:53:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:22.966 23:53:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:22.966 23:53:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:22.966 23:53:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.966 23:53:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:22.966 23:53:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:22.966 23:53:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:22.966 23:53:07 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:23.226 request: 00:11:23.226 { 00:11:23.226 "method": "env_dpdk_get_mem_stats", 00:11:23.226 "req_id": 1 00:11:23.226 } 00:11:23.226 Got JSON-RPC error response 00:11:23.226 response: 00:11:23.226 { 00:11:23.226 "code": -32601, 00:11:23.226 "message": "Method not found" 00:11:23.226 } 00:11:23.226 23:53:07 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:11:23.226 23:53:07 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:23.226 23:53:07 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:23.226 23:53:07 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:23.226 23:53:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 229477 00:11:23.226 23:53:07 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 229477 ']' 00:11:23.226 23:53:07 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 229477 00:11:23.226 23:53:07 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:11:23.227 23:53:07 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.227 23:53:07 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 229477 00:11:23.227 23:53:07 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.227 23:53:07 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.227 23:53:07 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 229477' 00:11:23.227 killing process with pid 229477 00:11:23.227 23:53:07 app_cmdline -- common/autotest_common.sh@973 -- # kill 229477 00:11:23.227 23:53:07 app_cmdline -- common/autotest_common.sh@978 -- # wait 229477 00:11:23.487 00:11:23.487 real 0m1.856s 00:11:23.487 user 0m2.171s 00:11:23.487 sys 0m0.528s 00:11:23.487 23:53:07 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.487 23:53:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:23.487 ************************************ 00:11:23.487 END TEST app_cmdline 00:11:23.487 ************************************ 00:11:23.487 23:53:07 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:11:23.487 23:53:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:23.487 23:53:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.487 23:53:07 -- common/autotest_common.sh@10 -- # set +x 00:11:23.747 ************************************ 00:11:23.748 START TEST version 00:11:23.748 ************************************ 00:11:23.748 23:53:07 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:11:23.748 * Looking for test storage... 
00:11:23.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:23.748 23:53:08 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:23.748 23:53:08 version -- common/autotest_common.sh@1711 -- # lcov --version 00:11:23.748 23:53:08 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:23.748 23:53:08 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:23.748 23:53:08 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.748 23:53:08 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.748 23:53:08 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.748 23:53:08 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.748 23:53:08 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.748 23:53:08 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.748 23:53:08 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.748 23:53:08 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.748 23:53:08 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.748 23:53:08 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.748 23:53:08 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.748 23:53:08 version -- scripts/common.sh@344 -- # case "$op" in 00:11:23.748 23:53:08 version -- scripts/common.sh@345 -- # : 1 00:11:23.748 23:53:08 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.748 23:53:08 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:23.748 23:53:08 version -- scripts/common.sh@365 -- # decimal 1 00:11:23.748 23:53:08 version -- scripts/common.sh@353 -- # local d=1 00:11:23.748 23:53:08 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.748 23:53:08 version -- scripts/common.sh@355 -- # echo 1 00:11:23.748 23:53:08 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.748 23:53:08 version -- scripts/common.sh@366 -- # decimal 2 00:11:23.748 23:53:08 version -- scripts/common.sh@353 -- # local d=2 00:11:23.748 23:53:08 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.748 23:53:08 version -- scripts/common.sh@355 -- # echo 2 00:11:23.748 23:53:08 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.748 23:53:08 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.748 23:53:08 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.748 23:53:08 version -- scripts/common.sh@368 -- # return 0 00:11:23.748 23:53:08 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.748 23:53:08 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:23.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.748 --rc genhtml_branch_coverage=1 00:11:23.748 --rc genhtml_function_coverage=1 00:11:23.748 --rc genhtml_legend=1 00:11:23.748 --rc geninfo_all_blocks=1 00:11:23.748 --rc geninfo_unexecuted_blocks=1 00:11:23.748 00:11:23.748 ' 00:11:23.748 23:53:08 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:23.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.748 --rc genhtml_branch_coverage=1 00:11:23.748 --rc genhtml_function_coverage=1 00:11:23.748 --rc genhtml_legend=1 00:11:23.748 --rc geninfo_all_blocks=1 00:11:23.748 --rc geninfo_unexecuted_blocks=1 00:11:23.748 00:11:23.748 ' 00:11:23.748 23:53:08 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:23.748 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.748 --rc genhtml_branch_coverage=1 00:11:23.748 --rc genhtml_function_coverage=1 00:11:23.748 --rc genhtml_legend=1 00:11:23.748 --rc geninfo_all_blocks=1 00:11:23.748 --rc geninfo_unexecuted_blocks=1 00:11:23.748 00:11:23.748 ' 00:11:23.748 23:53:08 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:23.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.748 --rc genhtml_branch_coverage=1 00:11:23.748 --rc genhtml_function_coverage=1 00:11:23.748 --rc genhtml_legend=1 00:11:23.748 --rc geninfo_all_blocks=1 00:11:23.748 --rc geninfo_unexecuted_blocks=1 00:11:23.748 00:11:23.748 ' 00:11:23.748 23:53:08 version -- app/version.sh@17 -- # get_header_version major 00:11:23.748 23:53:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:11:23.748 23:53:08 version -- app/version.sh@14 -- # cut -f2 00:11:23.748 23:53:08 version -- app/version.sh@14 -- # tr -d '"' 00:11:23.748 23:53:08 version -- app/version.sh@17 -- # major=25 00:11:23.748 23:53:08 version -- app/version.sh@18 -- # get_header_version minor 00:11:23.748 23:53:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:11:23.748 23:53:08 version -- app/version.sh@14 -- # cut -f2 00:11:23.748 23:53:08 version -- app/version.sh@14 -- # tr -d '"' 00:11:23.748 23:53:08 version -- app/version.sh@18 -- # minor=1 00:11:23.748 23:53:08 version -- app/version.sh@19 -- # get_header_version patch 00:11:23.748 23:53:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:11:23.748 23:53:08 version -- app/version.sh@14 -- # cut -f2 00:11:23.748 23:53:08 version -- app/version.sh@14 -- # tr -d '"' 00:11:23.748 23:53:08 version -- app/version.sh@19 -- # patch=0 00:11:23.748 23:53:08 version -- app/version.sh@20 -- # get_header_version suffix 00:11:23.748 23:53:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:11:23.748 23:53:08 version -- app/version.sh@14 -- # cut -f2 00:11:23.748 23:53:08 version -- app/version.sh@14 -- # tr -d '"' 00:11:23.748 23:53:08 version -- app/version.sh@20 -- # suffix=-pre 00:11:23.748 23:53:08 version -- app/version.sh@22 -- # version=25.1 00:11:23.748 23:53:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:23.748 23:53:08 version -- app/version.sh@28 -- # version=25.1rc0 00:11:23.748 23:53:08 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:23.748 23:53:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:24.008 23:53:08 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:24.008 23:53:08 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:24.008 00:11:24.008 real 0m0.276s 00:11:24.008 user 0m0.145s 00:11:24.008 sys 0m0.187s 00:11:24.008 23:53:08 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.008 
23:53:08 version -- common/autotest_common.sh@10 -- # set +x 00:11:24.009 ************************************ 00:11:24.009 END TEST version 00:11:24.009 ************************************ 00:11:24.009 23:53:08 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:24.009 23:53:08 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:11:24.009 23:53:08 -- spdk/autotest.sh@194 -- # uname -s 00:11:24.009 23:53:08 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:24.009 23:53:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:24.009 23:53:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:24.009 23:53:08 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:11:24.009 23:53:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:11:24.009 23:53:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:11:24.009 23:53:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.009 23:53:08 -- common/autotest_common.sh@10 -- # set +x 00:11:24.009 23:53:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:11:24.009 23:53:08 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:11:24.009 23:53:08 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:11:24.009 23:53:08 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:11:24.009 23:53:08 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:11:24.009 23:53:08 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:11:24.009 23:53:08 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:24.009 23:53:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.009 23:53:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.009 23:53:08 -- common/autotest_common.sh@10 -- # set +x 00:11:24.009 ************************************ 00:11:24.009 START TEST nvmf_tcp 00:11:24.009 ************************************ 00:11:24.009 23:53:08 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:24.009 * Looking for test storage... 
00:11:24.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:24.268 23:53:08 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:24.268 23:53:08 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:11:24.268 23:53:08 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:24.268 23:53:08 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:24.268 23:53:08 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.268 23:53:08 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.269 23:53:08 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:11:24.269 23:53:08 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.269 23:53:08 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:24.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.269 --rc genhtml_branch_coverage=1 00:11:24.269 --rc genhtml_function_coverage=1 00:11:24.269 --rc genhtml_legend=1 00:11:24.269 --rc geninfo_all_blocks=1 00:11:24.269 --rc geninfo_unexecuted_blocks=1 00:11:24.269 00:11:24.269 ' 00:11:24.269 23:53:08 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:24.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.269 --rc genhtml_branch_coverage=1 00:11:24.269 --rc genhtml_function_coverage=1 00:11:24.269 --rc genhtml_legend=1 00:11:24.269 --rc geninfo_all_blocks=1 00:11:24.269 --rc geninfo_unexecuted_blocks=1 00:11:24.269 00:11:24.269 ' 00:11:24.269 23:53:08 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:11:24.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.269 --rc genhtml_branch_coverage=1 00:11:24.269 --rc genhtml_function_coverage=1 00:11:24.269 --rc genhtml_legend=1 00:11:24.269 --rc geninfo_all_blocks=1 00:11:24.269 --rc geninfo_unexecuted_blocks=1 00:11:24.269 00:11:24.269 ' 00:11:24.269 23:53:08 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:24.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.269 --rc genhtml_branch_coverage=1 00:11:24.269 --rc genhtml_function_coverage=1 00:11:24.269 --rc genhtml_legend=1 00:11:24.269 --rc geninfo_all_blocks=1 00:11:24.269 --rc geninfo_unexecuted_blocks=1 00:11:24.269 00:11:24.269 ' 00:11:24.269 23:53:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:11:24.269 23:53:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:11:24.269 23:53:08 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:24.269 23:53:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.269 23:53:08 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.269 23:53:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:24.269 ************************************ 00:11:24.269 START TEST nvmf_target_core 00:11:24.269 ************************************ 00:11:24.269 23:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:24.269 * Looking for test storage... 00:11:24.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:24.269 23:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:24.269 23:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:11:24.269 23:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:24.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.530 --rc genhtml_branch_coverage=1 00:11:24.530 --rc genhtml_function_coverage=1 00:11:24.530 --rc genhtml_legend=1 00:11:24.530 --rc geninfo_all_blocks=1 00:11:24.530 --rc geninfo_unexecuted_blocks=1 00:11:24.530 00:11:24.530 ' 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:24.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.530 --rc genhtml_branch_coverage=1 00:11:24.530 --rc genhtml_function_coverage=1 00:11:24.530 --rc genhtml_legend=1 00:11:24.530 --rc geninfo_all_blocks=1 00:11:24.530 --rc geninfo_unexecuted_blocks=1 00:11:24.530 00:11:24.530 ' 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:24.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.530 --rc genhtml_branch_coverage=1 00:11:24.530 --rc genhtml_function_coverage=1 00:11:24.530 --rc genhtml_legend=1 00:11:24.530 --rc geninfo_all_blocks=1 00:11:24.530 --rc geninfo_unexecuted_blocks=1 00:11:24.530 00:11:24.530 ' 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:24.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.530 --rc genhtml_branch_coverage=1 00:11:24.530 --rc genhtml_function_coverage=1 00:11:24.530 --rc genhtml_legend=1 00:11:24.530 --rc geninfo_all_blocks=1 00:11:24.530 --rc geninfo_unexecuted_blocks=1 00:11:24.530 00:11:24.530 ' 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:11:24.530 23:53:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:24.531 23:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.531 23:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.531 23:53:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:24.531 
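The lt/cmp_versions block traced repeatedly above is scripts/common.sh gating the coverage flags on the installed lcov: lt 1.15 2 expands into cmp_versions 1.15 '<' 2, which splits both versions on dots and compares them field by field, and because 1.15 is older than 2 the legacy --rc lcov_*_coverage=1 options get exported. A minimal standalone sketch of that comparison, handling only the strict '<' / '>' cases (the helper name cmp_versions_sketch is invented for illustration; the real logic lives in scripts/common.sh):

    # Field-by-field dotted-version compare, mirroring the traced loop.
    # Returns 0 when "$1 $2 $3" holds, e.g. cmp_versions_sketch 1.15 '<' 2.
    cmp_versions_sketch() {
        local -a ver1 ver2
        local op=$2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            d1=${ver1[v]:-0}; d2=${ver2[v]:-0}
            if (( d1 > d2 )); then [[ $op == '>' ]]; return; fi
            if (( d1 < d2 )); then [[ $op == '<' ]]; return; fi
        done
        return 1    # equal: neither strictly less nor strictly greater
    }

    cmp_versions_sketch 1.15 '<' 2 && echo 'old lcov: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'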
************************************ 00:11:24.531 START TEST nvmf_abort 00:11:24.531 ************************************ 00:11:24.531 23:53:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:24.531 * Looking for test storage... 00:11:24.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.531 23:53:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:24.531 23:53:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:11:24.531 23:53:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.792 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:24.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.793 --rc genhtml_branch_coverage=1 00:11:24.793 --rc genhtml_function_coverage=1 00:11:24.793 --rc genhtml_legend=1 00:11:24.793 --rc geninfo_all_blocks=1 00:11:24.793 --rc geninfo_unexecuted_blocks=1 00:11:24.793 00:11:24.793 ' 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:24.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.793 --rc genhtml_branch_coverage=1 00:11:24.793 --rc genhtml_function_coverage=1 00:11:24.793 --rc genhtml_legend=1 00:11:24.793 --rc geninfo_all_blocks=1 00:11:24.793 --rc geninfo_unexecuted_blocks=1 00:11:24.793 00:11:24.793 ' 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:24.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.793 --rc genhtml_branch_coverage=1 00:11:24.793 --rc genhtml_function_coverage=1 00:11:24.793 --rc genhtml_legend=1 00:11:24.793 --rc geninfo_all_blocks=1 00:11:24.793 --rc geninfo_unexecuted_blocks=1 00:11:24.793 00:11:24.793 ' 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:24.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.793 --rc genhtml_branch_coverage=1 00:11:24.793 --rc genhtml_function_coverage=1 00:11:24.793 --rc genhtml_legend=1 00:11:24.793 --rc geninfo_all_blocks=1 00:11:24.793 --rc geninfo_unexecuted_blocks=1 00:11:24.793 00:11:24.793 ' 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
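Up to this point abort.sh has only pulled in the shared defaults (NVMF_PORT=4420, the generated host NQN, NVME_SUBNQN) and set its two constants before handing control to nvmftestinit, whose trace follows. Condensed, the preamble amounts to roughly the following; the $rootdir spelling is an assumption, since the log only shows fully expanded /var/jenkins/... paths:

    # test/nvmf/target/abort.sh, opening lines as reconstructed from the trace
    source "$rootdir/test/nvmf/common.sh"   # ports, NQNs, nvmftestinit/nvmftestfini helpers

    MALLOC_BDEV_SIZE=64          # MB handed to bdev_malloc_create later in the log
    MALLOC_BLOCK_SIZE=4096       # bytes per block

    nvmftestinit                 # NIC discovery, netns setup and connectivity check traced below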
00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:11:24.793 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:32.926 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:32.926 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:11:32.926 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:32.926 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:32.926 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:32.926 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:32.926 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.927 23:53:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:32.927 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:32.927 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:32.927 23:53:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:32.927 Found net devices under 0000:af:00.0: cvl_0_0 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:32.927 Found net devices under 0000:af:00.1: cvl_0_1 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.927 23:53:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:32.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:11:32.927 00:11:32.927 --- 10.0.0.2 ping statistics --- 00:11:32.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.927 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:32.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:11:32.927 00:11:32.927 --- 10.0.0.1 ping statistics --- 00:11:32.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.927 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:32.927 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=233570 00:11:32.928 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:32.928 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 233570 00:11:32.928 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 233570 ']' 00:11:32.928 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.928 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.928 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.928 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.928 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:32.928 [2024-12-09 23:53:16.596684] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
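At this point the test bed is up: the two E810 ports found by gather_supported_nvmf_pci_devs show up as cvl_0_0 and cvl_0_1, the target port has been moved into a network namespace, both sides answered a ping, and nvmf_tgt has just been launched inside that namespace while waitforlisten polls its RPC socket. For reference, the nvmf_tcp_init sequence traced above condenses to roughly the following; interface names, addresses and flags are copied from the log, paths are shortened, error handling and the SPDK wrappers are omitted, and the iptables comment text is only illustrative:

    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk       # names as printed in the trace

    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                       # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                   # initiator keeps the host network view
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
            -m comment --comment 'SPDK_NVMF: test rule'     # tag lets the teardown strip it again
    ping -c 1 10.0.0.2                                      # host -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1                  # namespace -> host
    modprobe nvme-tcp
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # target app, core mask 0xE

Running the target inside its own namespace is what lets a single machine act as both initiator (host side, 10.0.0.1) and target (namespace side, 10.0.0.2) over the physical e810 ports.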
00:11:32.928 [2024-12-09 23:53:16.596727] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.928 [2024-12-09 23:53:16.689878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:32.928 [2024-12-09 23:53:16.729346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.928 [2024-12-09 23:53:16.729383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.928 [2024-12-09 23:53:16.729391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.928 [2024-12-09 23:53:16.729399] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.928 [2024-12-09 23:53:16.729406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.928 [2024-12-09 23:53:16.730927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.928 [2024-12-09 23:53:16.731014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.928 [2024-12-09 23:53:16.731015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.185 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.185 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:33.186 [2024-12-09 23:53:17.486375] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:33.186 Malloc0 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:33.186 Delay0 
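With the target's reactors running and the RPC socket answering, abort.sh configures it through rpc_cmd, the suite's RPC helper (effectively scripts/rpc.py against the /var/tmp/spdk.sock socket that waitforlisten just polled). The three calls traced above, collected in one place; the -o/-u/-a option meanings are not restated here, the values are simply the ones the test passes:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256      # create the TCP transport
    rpc_cmd bdev_malloc_create 64 4096 -b Malloc0               # 64 MB RAM bdev, 4096-byte blocks
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 \
            -r 1000000 -t 1000000 -w 1000000 -n 1000000         # wrap it with large injected latencies

The delay bdev appears to be the point of the exercise: with every I/O held up artificially, the abort requests issued later always have something still in flight to cancel.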
00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:33.186 [2024-12-09 23:53:17.560570] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.186 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:33.444 [2024-12-09 23:53:17.697671] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:35.360 Initializing NVMe Controllers 00:11:35.360 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:35.360 controller IO queue size 128 less than required 00:11:35.360 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:35.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:35.360 Initialization complete. Launching workers. 
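The last RPCs above expose Delay0 over NVMe/TCP, and the abort example is then aimed at that listener from the host side of the namespace boundary; the counters that follow summarize how the run went. The same sequence written out, with the NQN, serial, address and flags taken from the trace (-a on nvmf_create_subsystem allows any host NQN to connect, and the path to the abort binary is shortened):

    NQN=nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_create_subsystem "$NQN" -a -s SPDK0             # serial SPDK0, any host allowed
    rpc_cmd nvmf_subsystem_add_ns "$NQN" Delay0                  # namespace backed by the delay bdev
    rpc_cmd nvmf_subsystem_add_listener "$NQN"    -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: 1-second run, queue depth 128, single core (mask 0x1)
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
            -c 0x1 -t 1 -l warning -q 128

In the summary below, the 37129 I/Os reported as failed are, in this test, the ones whose aborts landed; of 37190 abort commands submitted, 37133 succeeded and 57 were unsuccessful, most likely aborts that arrived after the target had already completed the I/O.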
00:11:35.360 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37129 00:11:35.360 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37190, failed to submit 62 00:11:35.360 success 37133, unsuccessful 57, failed 0 00:11:35.360 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:35.360 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.360 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:35.360 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.360 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:35.360 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:35.360 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:35.360 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:11:35.361 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:35.361 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:11:35.361 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.361 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:35.361 rmmod nvme_tcp 00:11:35.361 rmmod nvme_fabrics 00:11:35.361 rmmod nvme_keyring 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 233570 ']' 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 233570 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 233570 ']' 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 233570 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 233570 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 233570' 00:11:35.623 killing process with pid 233570 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 233570 00:11:35.623 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 233570 00:11:35.889 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:35.889 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:35.889 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:35.889 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:11:35.889 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:11:35.889 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:35.889 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:11:35.889 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:35.889 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:35.889 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.889 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.889 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.853 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:37.853 00:11:37.853 real 0m13.303s 00:11:37.853 user 0m14.096s 00:11:37.853 sys 0m6.550s 00:11:37.853 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.853 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:37.853 ************************************ 00:11:37.853 END TEST nvmf_abort 00:11:37.853 ************************************ 00:11:37.853 23:53:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:37.853 23:53:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:37.853 23:53:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.853 23:53:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:37.853 ************************************ 00:11:37.853 START TEST nvmf_ns_hotplug_stress 00:11:37.853 ************************************ 00:11:37.853 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:38.126 * Looking for test storage... 
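For reference, the nvmf_abort teardown traced just before this test started (nvmftestfini) condenses to the following; the ip netns delete line is an assumption about what remove_spdk_ns amounts to, and the pid and interface names are the ones from this run:

    modprobe -v -r nvme-tcp                               # also drops nvme_fabrics/nvme_keyring, as the rmmod lines show
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                    # stop nvmf_tgt (pid 233570 in this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the rules the test tagged
    ip netns delete cvl_0_0_ns_spdk                       # give cvl_0_0 back to the host
    ip -4 addr flush cvl_0_1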
00:11:38.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:38.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.126 --rc genhtml_branch_coverage=1 00:11:38.126 --rc genhtml_function_coverage=1 00:11:38.126 --rc genhtml_legend=1 00:11:38.126 --rc geninfo_all_blocks=1 00:11:38.126 --rc geninfo_unexecuted_blocks=1 00:11:38.126 00:11:38.126 ' 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:38.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.126 --rc genhtml_branch_coverage=1 00:11:38.126 --rc genhtml_function_coverage=1 00:11:38.126 --rc genhtml_legend=1 00:11:38.126 --rc geninfo_all_blocks=1 00:11:38.126 --rc geninfo_unexecuted_blocks=1 00:11:38.126 00:11:38.126 ' 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:38.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.126 --rc genhtml_branch_coverage=1 00:11:38.126 --rc genhtml_function_coverage=1 00:11:38.126 --rc genhtml_legend=1 00:11:38.126 --rc geninfo_all_blocks=1 00:11:38.126 --rc geninfo_unexecuted_blocks=1 00:11:38.126 00:11:38.126 ' 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:38.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.126 --rc genhtml_branch_coverage=1 00:11:38.126 --rc genhtml_function_coverage=1 00:11:38.126 --rc genhtml_legend=1 00:11:38.126 --rc geninfo_all_blocks=1 00:11:38.126 --rc geninfo_unexecuted_blocks=1 00:11:38.126 00:11:38.126 ' 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.126 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:38.127 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:46.487 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.487 
23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:46.487 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:46.487 Found net devices under 0000:af:00.0: cvl_0_0 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:46.487 Found net devices under 0000:af:00.1: cvl_0_1 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:46.487 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:46.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:11:46.488 00:11:46.488 --- 10.0.0.2 ping statistics --- 00:11:46.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.488 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:46.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:46.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:11:46.488 00:11:46.488 --- 10.0.0.1 ping statistics --- 00:11:46.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.488 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=237889 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 237889 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
237889 ']' 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.488 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.488 [2024-12-09 23:53:29.911169] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:11:46.488 [2024-12-09 23:53:29.911216] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.488 [2024-12-09 23:53:30.008017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:46.488 [2024-12-09 23:53:30.052198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.488 [2024-12-09 23:53:30.052236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.488 [2024-12-09 23:53:30.052246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.488 [2024-12-09 23:53:30.052255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.488 [2024-12-09 23:53:30.052263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
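The device-discovery part of the trace above (gather_supported_nvmf_pci_devs) builds lists of Intel E810/X722 and Mellanox PCI device IDs, walks the detected NICs, and reports the two E810 ports (0000:af:00.0 and 0000:af:00.1, device 0x159b, driver ice) along with their kernel netdevs cvl_0_0 and cvl_0_1. A minimal sketch of the same idea using plain sysfs lookups, not SPDK's actual helper; the paths and ID list are assumptions for illustration:

# Sketch only: report Intel E810 network functions by vendor/device ID via sysfs.
intel=0x8086
e810_ids="0x1592 0x159b"
for dev in /sys/bus/pci/devices/*; do
    [ -d "$dev/net" ] || continue                      # keep only network functions
    vendor=$(cat "$dev/vendor"); device=$(cat "$dev/device")
    if [ "$vendor" = "$intel" ] && echo "$e810_ids" | grep -qw -- "$device"; then
        echo "Found ${dev##*/} ($vendor - $device)"
        ls "$dev/net"                                   # kernel netdev name, e.g. cvl_0_0
    fi
done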
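Everything from nvmf_tcp_init through nvmfappstart in the trace amounts to a two-port TCP topology: the target interface is moved into a private network namespace while the initiator interface stays in the root namespace, and the target application is then launched inside that namespace. A condensed sketch of those steps, with interface names, addresses, and flags taken from the log (not the exact helper code from nvmf/common.sh):

# Sketch: target/initiator split used by the test, as seen in the log.
TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                  # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target check
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target -> initiator check
# Start the NVMe-oF target inside the namespace on cores 1-3 (-m 0xE):
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &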
00:11:46.488 [2024-12-09 23:53:30.053665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.488 [2024-12-09 23:53:30.053772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.488 [2024-12-09 23:53:30.053773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.488 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.488 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:11:46.488 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:46.488 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.488 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.488 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.488 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:46.488 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:46.488 [2024-12-09 23:53:30.376450] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.488 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:46.488 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.488 [2024-12-09 23:53:30.782098] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.488 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:46.771 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:46.771 Malloc0 00:11:46.771 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:47.047 Delay0 00:11:47.047 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.327 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:47.327 NULL1 00:11:47.608 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:47.608 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=238402 00:11:47.608 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:47.608 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:11:47.608 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.876 Read completed with error (sct=0, sc=11) 00:11:47.876 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.154 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:48.154 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:48.154 true 00:11:48.418 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:11:48.418 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.988 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:48.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:49.247 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:49.247 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:49.508 true 00:11:49.508 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:11:49.508 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.767 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.767 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:49.767 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:50.027 true 00:11:50.027 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:11:50.027 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.287 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:50.287 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:50.287 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:50.287 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:50.287 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:50.287 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:50.548 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:50.548 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:50.548 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:50.548 true 00:11:50.809 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:11:50.809 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.379 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.640 23:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:51.640 23:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:51.900 true 00:11:51.900 23:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:11:51.900 23:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.159 23:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.422 23:53:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:52.422 23:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:52.422 true 00:11:52.422 23:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:11:52.422 23:53:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.805 23:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.805 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:53.805 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:54.065 true 00:11:54.065 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:11:54.065 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.012 23:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.012 23:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:55.012 23:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:55.271 true 00:11:55.271 23:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:11:55.271 23:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.531 23:53:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.531 23:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:11:55.791 23:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:55.791 true 00:11:55.791 23:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:11:55.791 23:53:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.172 23:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.172 23:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:57.172 23:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:57.432 true 00:11:57.432 23:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:11:57.432 23:53:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.370 23:53:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:58.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.370 23:53:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:58.370 23:53:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:58.630 true 00:11:58.630 23:53:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:11:58.630 23:53:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.890 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:58.890 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:58.890 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:59.149 true 00:11:59.149 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:11:59.149 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.530 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.530 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:00.530 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:00.790 true 00:12:00.790 23:53:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:00.790 23:53:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.729 23:53:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:01.729 23:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:01.729 23:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:01.988 true 00:12:01.988 23:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:01.988 23:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.248 23:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.248 23:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:02.248 23:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:02.507 true 00:12:02.507 23:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 238402 00:12:02.507 23:53:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.889 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.890 23:53:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.890 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:03.890 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:03.890 true 00:12:04.149 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:04.149 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.718 23:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:04.979 23:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:04.979 23:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:05.239 true 00:12:05.240 23:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:05.240 23:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.499 23:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.499 23:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:05.499 23:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:05.760 true 00:12:05.760 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:05.760 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.141 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:07.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.141 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:07.141 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:07.401 true 00:12:07.401 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:07.401 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.340 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.340 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:08.340 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:08.599 true 00:12:08.599 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:08.599 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.859 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:09.119 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:09.119 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:09.119 true 00:12:09.119 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:09.119 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:12:10.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.519 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:10.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.519 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:10.519 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:10.778 true 00:12:10.778 23:53:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:10.778 23:53:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.718 23:53:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.718 23:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:11.718 23:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:11.978 true 00:12:11.978 23:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:11.978 23:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.237 23:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.496 23:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:12.496 23:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:12.496 true 00:12:12.496 23:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:12.496 23:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.897 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:12:13.897 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.897 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:13.897 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:14.156 true 00:12:14.156 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:14.156 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.095 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.095 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:15.095 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:15.354 true 00:12:15.354 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:15.354 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.614 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.873 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:15.873 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:15.873 true 00:12:15.873 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:15.873 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.271 23:54:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.271 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:17.271 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:17.533 true 00:12:17.533 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:17.533 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.471 Initializing NVMe Controllers 00:12:18.471 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:18.471 Controller IO queue size 128, less than required. 00:12:18.471 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:18.471 Controller IO queue size 128, less than required. 00:12:18.471 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:18.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:18.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:18.471 Initialization complete. Launching workers. 
00:12:18.471 ========================================================
00:12:18.471 Latency(us)
00:12:18.471 Device Information : IOPS MiB/s Average min max
00:12:18.471 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2533.33 1.24 35501.96 1702.01 1025083.73
00:12:18.471 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18516.80 9.04 6912.37 2127.88 435547.35
00:12:18.471 ========================================================
00:12:18.471 Total : 21050.13 10.28 10353.06 1702.01 1025083.73
00:12:18.471
00:12:18.471 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.471 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:12:18.471 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:18.731 true 00:12:18.731 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 238402 00:12:18.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (238402) - No such process 00:12:18.731 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 238402 00:12:18.731 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.991 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:19.251 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:12:19.251 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:12:19.251 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:12:19.251 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:19.251 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:12:19.251 null0 00:12:19.251 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:19.251 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:19.251 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:12:19.511 null1 00:12:19.511 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:19.511 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:19.511 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:12:19.772 null2 00:12:19.772 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:19.772 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:19.772 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:19.772 null3 00:12:19.772 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:19.772 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:19.772 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:20.032 null4 00:12:20.032 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:20.032 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:20.032 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:20.291 null5 00:12:20.291 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:20.291 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:20.291 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:20.551 null6 00:12:20.551 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:20.551 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:20.551 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:20.811 null7 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
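The trace above shows ns_hotplug_stress.sh setting up its second phase: eight null bdevs (null0 through null7), each 100 MB with a 4096-byte block size, created one per worker via bdev_null_create. A minimal sketch of that loop, reconstructed from the sh@58-60 xtrace lines above (the rpc.py path is shortened here; the log invokes it via the absolute workspace path):

    nthreads=8
    pids=()
    # One null bdev per worker: name nullN, 100 MB, 4096-byte blocks
    # (arguments copied from the bdev_null_create calls in the trace).
    for (( i = 0; i < nthreads; ++i )); do
        scripts/rpc.py bdev_null_create "null$i" 100 4096
    done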
00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:20.811 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
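Each worker being launched here runs add_remove, which the sh@14-18 xtrace lines above expand into repeated nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls against nqn.2016-06.io.spdk:cnode1. A sketch of the worker and the launch/wait sequence, reconstructed from the trace (sh@62-66); variable names follow what the xtrace prints, and the rpc.py path is again shortened:

    add_remove() {
        local nsid=$1 bdev=$2
        # Ten hotplug cycles: attach the bdev as namespace $nsid, then remove it.
        for (( i = 0; i < 10; ++i )); do
            scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8   # as set at sh@58 in the trace
    pids=()
    # Eight concurrent workers, one per namespace/bdev pair; the test then
    # waits on all of them (the "wait 244476 244479 ..." line in the trace).
    for (( i = 0; i < nthreads; ++i )); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"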
00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 244476 244479 244481 244484 244487 244491 244493 244496 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:20.812 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.073 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:21.333 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.333 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:21.333 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:21.333 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:21.333 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:21.333 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:21.333 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:21.333 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.593 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:21.593 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.853 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.854 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:21.854 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.854 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.854 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:21.854 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.854 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.854 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:21.854 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.854 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.854 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:21.854 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:21.854 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:21.854 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:22.113 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.113 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:22.113 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:22.113 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:22.113 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:22.113 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:22.113 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:22.113 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.371 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:22.630 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.630 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:22.630 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:22.630 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:22.630 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:22.630 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:22.630 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:22.630 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:22.630 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:22.630 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.630 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:22.630 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:22.630 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.630 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:22.630 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:22.630 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.630 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:22.630 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:12:22.631 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:22.889 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:22.890 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:22.890 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:22.890 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:22.890 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:23.147 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.147 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.148 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:23.406 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.406 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:23.406 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:23.406 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:23.406 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:23.406 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:23.406 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:23.406 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:23.406 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.406 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.665 23:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:23.665 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.665 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:23.665 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:23.665 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:23.665 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:23.924 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:24.183 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.183 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:24.183 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:24.183 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:24.183 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:24.183 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:24.183 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:24.183 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:24.183 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:12:24.183 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.183 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:24.442 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.700 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:24.700 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:24.700 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:24.700 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:24.700 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:24.700 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:24.700 23:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:24.701 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.701 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.701 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.701 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.959 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.959 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.959 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.959 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.959 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.959 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.959 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.959 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.959 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:12:24.959 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.960 rmmod nvme_tcp 00:12:24.960 rmmod nvme_fabrics 00:12:24.960 rmmod nvme_keyring 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 237889 ']' 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 237889 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 237889 ']' 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 237889 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 237889 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 237889' 00:12:24.960 killing process with pid 237889 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 237889 00:12:24.960 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 237889 00:12:25.220 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:25.220 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
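The add/remove churn traced above comes from the stress loop in ns_hotplug_stress.sh (the @16/@17/@18 trace tags): namespaces 1-8, each backed by one of the null bdevs null0-null7, are repeatedly attached to and detached from cnode1 over RPC, apparently in parallel judging by the interleaved ordering. A minimal sketch of one such cycle, using the same rpc.py path and NQN as the log (the loop bookkeeping and concurrency in the real script may differ):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for n in {1..8}; do
    "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"   # attach bdev null(n-1) as namespace n
done
for n in {1..8}; do
    "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"                    # detach namespace n again
done

The traced test repeats cycles like this until the @16 counter reaches its bound (the repeated "(( i < 10 ))" checks), which is what produces the long run of near-identical entries above.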
00:12:25.220 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:25.220 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:12:25.220 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:25.220 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:25.220 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:25.220 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:25.220 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:25.220 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.220 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.220 23:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.131 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:27.131 00:12:27.131 real 0m49.305s 00:12:27.131 user 3m11.949s 00:12:27.131 sys 0m19.952s 00:12:27.131 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.131 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.131 ************************************ 00:12:27.131 END TEST nvmf_ns_hotplug_stress 00:12:27.131 ************************************ 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:27.392 ************************************ 00:12:27.392 START TEST nvmf_delete_subsystem 00:12:27.392 ************************************ 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:27.392 * Looking for test storage... 
00:12:27.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.392 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:27.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.653 --rc genhtml_branch_coverage=1 00:12:27.653 --rc genhtml_function_coverage=1 00:12:27.653 --rc genhtml_legend=1 00:12:27.653 --rc geninfo_all_blocks=1 00:12:27.653 --rc geninfo_unexecuted_blocks=1 00:12:27.653 00:12:27.653 ' 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:27.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.653 --rc genhtml_branch_coverage=1 00:12:27.653 --rc genhtml_function_coverage=1 00:12:27.653 --rc genhtml_legend=1 00:12:27.653 --rc geninfo_all_blocks=1 00:12:27.653 --rc geninfo_unexecuted_blocks=1 00:12:27.653 00:12:27.653 ' 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:27.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.653 --rc genhtml_branch_coverage=1 00:12:27.653 --rc genhtml_function_coverage=1 00:12:27.653 --rc genhtml_legend=1 00:12:27.653 --rc geninfo_all_blocks=1 00:12:27.653 --rc geninfo_unexecuted_blocks=1 00:12:27.653 00:12:27.653 ' 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:27.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.653 --rc genhtml_branch_coverage=1 00:12:27.653 --rc genhtml_function_coverage=1 00:12:27.653 --rc genhtml_legend=1 00:12:27.653 --rc geninfo_all_blocks=1 00:12:27.653 --rc geninfo_unexecuted_blocks=1 00:12:27.653 00:12:27.653 ' 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.653 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:27.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:27.654 23:54:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.790 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:35.791 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.791 
23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:35.791 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:35.791 Found net devices under 0000:af:00.0: cvl_0_0 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:35.791 Found net devices under 0000:af:00.1: cvl_0_1 
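The "Found 0000:af:00.x" / "Found net devices under ..." lines above are nvmf/common.sh discovering which kernel interfaces sit behind the supported E810 NICs: for each matching PCI address it lists /sys/bus/pci/devices/<bdf>/net/ and records the interface names (cvl_0_0 and cvl_0_1 here). A rough sketch of that sysfs walk, with the PCI addresses from this log hard-coded for illustration (the real helper has more branches for RDMA, unbound devices and link state):

pci_devs=(0000:af:00.0 0000:af:00.1)   # the two E810 ports found above
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:af:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done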
00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.791 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:35.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:12:35.791 00:12:35.791 --- 10.0.0.2 ping statistics --- 00:12:35.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.791 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:35.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:12:35.791 00:12:35.791 --- 10.0.0.1 ping statistics --- 00:12:35.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.791 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=249285 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 249285 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 249285 ']' 00:12:35.791 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.792 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.792 23:54:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.792 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.792 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.792 [2024-12-09 23:54:19.311982] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:12:35.792 [2024-12-09 23:54:19.312026] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.792 [2024-12-09 23:54:19.408358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:35.792 [2024-12-09 23:54:19.445177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.792 [2024-12-09 23:54:19.445215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.792 [2024-12-09 23:54:19.445224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.792 [2024-12-09 23:54:19.445233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.792 [2024-12-09 23:54:19.445240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.792 [2024-12-09 23:54:19.446487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.792 [2024-12-09 23:54:19.446488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.792 [2024-12-09 23:54:20.201436] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:35.792 23:54:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.792 [2024-12-09 23:54:20.221642] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.792 NULL1 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.792 Delay0 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=249538 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:35.792 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:36.051 [2024-12-09 23:54:20.343584] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
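
For reference, the target setup and workload traced above reduce to roughly the following sequence. This is a sketch reconstructed from this run's xtrace output (rpc_cmd in the harness forwards to scripts/rpc.py over /var/tmp/spdk.sock); every value is the one used in this run, not a general default.

    # start the target inside the namespace prepared for it (cores 0-1, tracepoints on)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    # once it listens on /var/tmp/spdk.sock, configure it over RPC
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, up to 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                                                   # null bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s of added latency per I/O
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # 70/30 randrw load from the host side (cores 2-3) while the subsystem is deleted underneath it
    ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

Putting the Delay0 bdev in front of NULL1 inflates every I/O to about one second (the second perf pass below reports ~1,001,909 us average latency), which keeps plenty of requests in flight at the moment the subsystem is deleted out from under the connected host.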
00:12:37.953 23:54:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.953 23:54:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.953 23:54:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 starting I/O failed: -6 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 starting I/O failed: -6 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 starting I/O failed: -6 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 starting I/O failed: -6 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 starting I/O failed: -6 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 starting I/O failed: -6 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 starting I/O failed: -6 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 starting I/O failed: -6 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 starting I/O failed: -6 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 starting I/O failed: -6 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 [2024-12-09 23:54:22.458203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1733ae0 is same with the state(6) to be set 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 
00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 starting I/O failed: -6 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 starting I/O failed: -6 00:12:38.212 Write completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 starting I/O failed: -6 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.212 Read completed with error (sct=0, sc=8) 00:12:38.213 starting I/O failed: -6 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, 
sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 starting I/O failed: -6 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 starting I/O failed: -6 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 starting I/O failed: -6 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 starting I/O failed: -6 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 starting I/O failed: -6 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 [2024-12-09 23:54:22.463338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa5b400d500 is same with the state(6) to be set 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, 
sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:38.213 Write completed with error (sct=0, sc=8) 00:12:38.213 Read completed with error (sct=0, sc=8) 00:12:39.150 [2024-12-09 23:54:23.436616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1733720 is same with the state(6) to be set 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 [2024-12-09 23:54:23.461663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1732410 is same with the state(6) to be set 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 [2024-12-09 23:54:23.461833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1733900 is same with the state(6) to be set 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 
00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 [2024-12-09 23:54:23.464525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa5b400d050 is same with the state(6) to be set 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Read completed with error (sct=0, sc=8) 00:12:39.150 Write completed with error (sct=0, sc=8) 00:12:39.150 [2024-12-09 23:54:23.464922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa5b400d830 is same with the state(6) to be set 00:12:39.150 Initializing NVMe Controllers 00:12:39.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:39.150 Controller IO queue size 128, less than required. 00:12:39.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:39.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:39.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:39.150 Initialization complete. Launching workers. 
00:12:39.150 ======================================================== 00:12:39.150 Latency(us) 00:12:39.150 Device Information : IOPS MiB/s Average min max 00:12:39.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.85 0.08 911110.51 319.41 1005981.03 00:12:39.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.38 0.07 933073.29 229.01 1010323.96 00:12:39.151 ======================================================== 00:12:39.151 Total : 316.23 0.15 921763.32 229.01 1010323.96 00:12:39.151 00:12:39.151 [2024-12-09 23:54:23.465432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1733720 (9): Bad file descriptor 00:12:39.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:39.151 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.151 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:39.151 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 249538 00:12:39.151 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 249538 00:12:39.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (249538) - No such process 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 249538 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 249538 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 249538 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.720 23:54:23 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.720 [2024-12-09 23:54:23.990948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.720 23:54:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.720 23:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.720 23:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=250152 00:12:39.720 23:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:39.720 23:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:39.720 23:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 250152 00:12:39.720 23:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:39.720 [2024-12-09 23:54:24.085143] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
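
The long runs of "Read/Write completed with error (sct=0, sc=8)" above are the in-flight perf I/Os being aborted as nvmf_delete_subsystem tears down the subsystem's queue pairs while the host is still connected, which is exactly the behavior this test exercises. The script then polls for the perf process to go away. Reconstructed from the traced line numbers (delete_subsystem.sh lines 56-60 in this pass, a 30-iteration bound in the first pass), the wait is roughly:

    perf_pid=$!                              # pid of the backgrounded spdk_nvme_perf (250152 in this pass)
    delay=0
    while kill -0 "$perf_pid"; do            # perf still running?
        sleep 0.5
        if (( delay++ > 20 )); then          # ~10 s budget before giving up
            echo "perf did not exit in time" >&2   # exact failure action is not visible in this trace
            exit 1
        fi
    done

The first pass then asserts the pid is really gone with NOT wait "$perf_pid" (NOT is the harness helper that inverts the exit status, as traced at autotest_common.sh lines 640-679 above), while the second pass simply waits for perf, which here runs to completion and prints the latency summary seen above.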
00:12:40.287 23:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:40.287 23:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 250152 00:12:40.288 23:54:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:40.546 23:54:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:40.546 23:54:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 250152 00:12:40.546 23:54:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:41.117 23:54:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:41.117 23:54:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 250152 00:12:41.117 23:54:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:41.684 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:41.684 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 250152 00:12:41.684 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:42.251 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:42.251 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 250152 00:12:42.251 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:42.820 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:42.820 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 250152 00:12:42.820 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:42.820 Initializing NVMe Controllers 00:12:42.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:42.820 Controller IO queue size 128, less than required. 00:12:42.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:42.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:42.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:42.820 Initialization complete. Launching workers. 
00:12:42.820 ======================================================== 00:12:42.820 Latency(us) 00:12:42.820 Device Information : IOPS MiB/s Average min max 00:12:42.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001909.09 1000130.49 1041226.48 00:12:42.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003822.68 1000221.37 1010558.17 00:12:42.820 ======================================================== 00:12:42.820 Total : 256.00 0.12 1002865.88 1000130.49 1041226.48 00:12:42.820 00:12:43.080 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:43.080 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 250152 00:12:43.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (250152) - No such process 00:12:43.080 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 250152 00:12:43.080 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:43.080 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:43.080 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:43.080 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:12:43.080 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:43.080 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:12:43.080 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:43.080 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:43.340 rmmod nvme_tcp 00:12:43.340 rmmod nvme_fabrics 00:12:43.340 rmmod nvme_keyring 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 249285 ']' 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 249285 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 249285 ']' 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 249285 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 249285 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 249285' 00:12:43.340 killing process with pid 249285 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 249285 00:12:43.340 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 249285 00:12:43.600 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:43.600 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:43.600 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:43.600 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:12:43.600 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:12:43.600 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:43.600 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:43.600 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:43.600 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:43.600 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.600 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.600 23:54:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.510 23:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:45.510 00:12:45.510 real 0m18.260s 00:12:45.510 user 0m30.270s 00:12:45.510 sys 0m7.439s 00:12:45.510 23:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.510 23:54:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:45.510 ************************************ 00:12:45.510 END TEST nvmf_delete_subsystem 00:12:45.510 ************************************ 00:12:45.510 23:54:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:45.771 23:54:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.771 23:54:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.771 23:54:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:45.771 ************************************ 00:12:45.771 START TEST nvmf_host_management 00:12:45.771 ************************************ 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:45.771 * Looking for test storage... 
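
Before the host_management test proceeds, the nvmftestfini teardown traced just above undoes the delete_subsystem setup. Roughly, and using this run's pid and interface names (the netns removal is shown as a plain ip netns delete, which is an assumption about what remove_spdk_ns does):

    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics    # -v prints the underlying rmmod calls (nvme_tcp, nvme_fabrics, nvme_keyring)
    kill 249285 && wait 249285                                # stop the nvmf_tgt reactor process started at the top of the test
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop the SPDK_NVMF ACCEPT rule added during setup
    ip netns delete cvl_0_0_ns_spdk                           # assumed equivalent of remove_spdk_ns for this run
    ip -4 addr flush cvl_0_1                                  # clear the initiator-side address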
00:12:45.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:45.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.771 --rc genhtml_branch_coverage=1 00:12:45.771 --rc genhtml_function_coverage=1 00:12:45.771 --rc genhtml_legend=1 00:12:45.771 --rc geninfo_all_blocks=1 00:12:45.771 --rc geninfo_unexecuted_blocks=1 00:12:45.771 00:12:45.771 ' 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:45.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.771 --rc genhtml_branch_coverage=1 00:12:45.771 --rc genhtml_function_coverage=1 00:12:45.771 --rc genhtml_legend=1 00:12:45.771 --rc geninfo_all_blocks=1 00:12:45.771 --rc geninfo_unexecuted_blocks=1 00:12:45.771 00:12:45.771 ' 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:45.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.771 --rc genhtml_branch_coverage=1 00:12:45.771 --rc genhtml_function_coverage=1 00:12:45.771 --rc genhtml_legend=1 00:12:45.771 --rc geninfo_all_blocks=1 00:12:45.771 --rc geninfo_unexecuted_blocks=1 00:12:45.771 00:12:45.771 ' 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:45.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.771 --rc genhtml_branch_coverage=1 00:12:45.771 --rc genhtml_function_coverage=1 00:12:45.771 --rc genhtml_legend=1 00:12:45.771 --rc geninfo_all_blocks=1 00:12:45.771 --rc geninfo_unexecuted_blocks=1 00:12:45.771 00:12:45.771 ' 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.771 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.772 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.772 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.772 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.772 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.772 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.032 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:46.032 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:46.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:12:46.033 23:54:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:54.168 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.168 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:54.169 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:54.169 Found net devices under 0000:af:00.0: cvl_0_0 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.169 23:54:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:54.169 Found net devices under 0000:af:00.1: cvl_0_1 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:54.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:12:54.169 00:12:54.169 --- 10.0.0.2 ping statistics --- 00:12:54.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.169 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:12:54.169 00:12:54.169 --- 10.0.0.1 ping statistics --- 00:12:54.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.169 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=254576 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 254576 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:54.169 23:54:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 254576 ']' 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.169 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.169 [2024-12-09 23:54:37.661625] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:12:54.169 [2024-12-09 23:54:37.661678] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.169 [2024-12-09 23:54:37.755820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.169 [2024-12-09 23:54:37.797897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.169 [2024-12-09 23:54:37.797935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.169 [2024-12-09 23:54:37.797945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.169 [2024-12-09 23:54:37.797954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.169 [2024-12-09 23:54:37.797962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
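The startup traced above follows a fixed pattern: the harness launches nvmf_tgt inside the freshly created target namespace (cvl_0_0_ns_spdk) with the core mask and trace flags shown, then blocks until the application answers on its UNIX-domain RPC socket. A minimal sketch of that pattern, assuming the rpc_get_methods probe and the paths below; the harness's own nvmfappstart/waitforlisten helpers differ in detail:

```bash
#!/usr/bin/env bash
# Hedged sketch: start the NVMe-oF target in its namespace and wait for RPC.
# Paths, core mask and the probe method mirror the trace above but are
# assumptions about the harness internals, not a copy of nvmfappstart.
set -e

NS=cvl_0_0_ns_spdk
SOCK=/var/tmp/spdk.sock
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
tgt_pid=$!

# Poll until the target has created /var/tmp/spdk.sock and answers an RPC.
for _ in $(seq 1 100); do
    if "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &> /dev/null; then
        echo "nvmf_tgt (pid $tgt_pid) is listening on $SOCK"
        break
    fi
    sleep 0.1
done
```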
00:12:54.169 [2024-12-09 23:54:37.799788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.169 [2024-12-09 23:54:37.799897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.169 [2024-12-09 23:54:37.799934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.169 [2024-12-09 23:54:37.799935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.169 [2024-12-09 23:54:38.556022] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.169 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.169 Malloc0 00:12:54.169 [2024-12-09 23:54:38.633924] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=254877 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 254877 /var/tmp/bdevperf.sock 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 254877 ']' 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:54.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:54.430 { 00:12:54.430 "params": { 00:12:54.430 "name": "Nvme$subsystem", 00:12:54.430 "trtype": "$TEST_TRANSPORT", 00:12:54.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:54.430 "adrfam": "ipv4", 00:12:54.430 "trsvcid": "$NVMF_PORT", 00:12:54.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:54.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:54.430 "hdgst": ${hdgst:-false}, 00:12:54.430 "ddgst": ${ddgst:-false} 00:12:54.430 }, 00:12:54.430 "method": "bdev_nvme_attach_controller" 00:12:54.430 } 00:12:54.430 EOF 00:12:54.430 )") 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:12:54.430 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:54.430 "params": { 00:12:54.430 "name": "Nvme0", 00:12:54.430 "trtype": "tcp", 00:12:54.430 "traddr": "10.0.0.2", 00:12:54.430 "adrfam": "ipv4", 00:12:54.430 "trsvcid": "4420", 00:12:54.430 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:54.430 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:54.430 "hdgst": false, 00:12:54.430 "ddgst": false 00:12:54.430 }, 00:12:54.430 "method": "bdev_nvme_attach_controller" 00:12:54.430 }' 00:12:54.430 [2024-12-09 23:54:38.742480] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
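The --json /dev/fd/63 argument above means bdevperf reads its bdev configuration from a pipe produced by gen_nvmf_target_json. A hedged sketch of the document it effectively receives, written to a regular file for clarity: the outer envelope follows the standard SPDK JSON config schema (assumed here), while the bdev_nvme_attach_controller parameters are exactly the ones printed in the trace:

```bash
# Hedged sketch of the bdevperf input generated above. Only the attach
# parameters are taken verbatim from the trace; the envelope is the usual
# SPDK "subsystems"/"bdev" config layout.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

./build/examples/bdevperf --json /tmp/bdevperf_nvme.json \
    -q 64 -o 65536 -w verify -t 10
```

The trace's real invocation additionally passes -r /var/tmp/bdevperf.sock so the harness can poll the running perf process over RPC (the bdev_get_iostat loop that follows).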
00:12:54.430 [2024-12-09 23:54:38.742529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254877 ] 00:12:54.430 [2024-12-09 23:54:38.834450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.430 [2024-12-09 23:54:38.873527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.689 Running I/O for 10 seconds... 00:12:55.256 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.256 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1091 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1091 -ge 100 ']' 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:55.257 
23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.257 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:55.257 [2024-12-09 23:54:39.638074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
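The burst of ABORTED - SQ DELETION completions that starts here is the point of the test: host_management.sh removes the host NQN from the subsystem's allow list while bdevperf still has I/O in flight, so the target deletes the host's queues and every outstanding command on qpair 1 is failed back to the initiator; re-adding the host afterwards lets the bdev_nvme reset path reconnect. A hedged sketch of that round trip using rpc.py directly (the test goes through its rpc_cmd wrapper instead):

```bash
# Hedged sketch of the allow-list round trip exercised by this test.
# NQNs are the ones from the trace; rpc.py path/arguments are assumptions.
RPC=./scripts/rpc.py
SUBNQN=nqn.2016-06.io.spdk:cnode0
HOSTNQN=nqn.2016-06.io.spdk:host0

$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"   # in-flight I/O aborts
sleep 1
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"      # controller reset can reconnect
```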
00:12:55.257 [2024-12-09 23:54:39.638288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 
[2024-12-09 23:54:39.638493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.257 [2024-12-09 23:54:39.638665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.257 [2024-12-09 23:54:39.638674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 
23:54:39.638694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.638713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.638733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.638754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.638775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.638796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.638816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.638841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.638861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.638882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.638902] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.638922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.638942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.638962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.638983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.638993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:55.258 [2024-12-09 23:54:39.639406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.258 [2024-12-09 23:54:39.639438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:12:55.258 [2024-12-09 23:54:39.640369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:12:55.258 task offset: 21376 on job bdev=Nvme0n1 fails 00:12:55.258 00:12:55.258 Latency(us) 00:12:55.258 [2024-12-09T22:54:39.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.258 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:55.258 Job: Nvme0n1 ended in about 0.56 seconds with error 00:12:55.258 Verification LBA range: start 0x0 length 0x400 00:12:55.258 Nvme0n1 : 0.56 2093.36 130.84 114.80 0.00 28366.60 1703.94 26109.54 00:12:55.258 [2024-12-09T22:54:39.732Z] =================================================================================================================== 00:12:55.259 [2024-12-09T22:54:39.732Z] Total : 2093.36 130.84 114.80 0.00 28366.60 1703.94 26109.54 00:12:55.259 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.259 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:55.259 [2024-12-09 23:54:39.642663] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:55.259 [2024-12-09 23:54:39.642686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cf760 (9): Bad file 
descriptor 00:12:55.259 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.259 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:55.259 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.259 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:55.259 [2024-12-09 23:54:39.689984] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:12:56.195 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 254877 00:12:56.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (254877) - No such process 00:12:56.195 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:56.195 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:56.195 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:56.195 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:56.195 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:12:56.195 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:12:56.195 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:56.195 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:56.195 { 00:12:56.195 "params": { 00:12:56.195 "name": "Nvme$subsystem", 00:12:56.195 "trtype": "$TEST_TRANSPORT", 00:12:56.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:56.195 "adrfam": "ipv4", 00:12:56.195 "trsvcid": "$NVMF_PORT", 00:12:56.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:56.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:56.195 "hdgst": ${hdgst:-false}, 00:12:56.195 "ddgst": ${ddgst:-false} 00:12:56.195 }, 00:12:56.195 "method": "bdev_nvme_attach_controller" 00:12:56.195 } 00:12:56.195 EOF 00:12:56.195 )") 00:12:56.195 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:12:56.455 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:12:56.455 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:12:56.455 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:56.455 "params": { 00:12:56.455 "name": "Nvme0", 00:12:56.455 "trtype": "tcp", 00:12:56.455 "traddr": "10.0.0.2", 00:12:56.455 "adrfam": "ipv4", 00:12:56.455 "trsvcid": "4420", 00:12:56.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:56.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:56.455 "hdgst": false, 00:12:56.455 "ddgst": false 00:12:56.455 }, 00:12:56.455 "method": "bdev_nvme_attach_controller" 00:12:56.455 }' 00:12:56.455 [2024-12-09 23:54:40.710357] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:12:56.455 [2024-12-09 23:54:40.710409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid255159 ] 00:12:56.455 [2024-12-09 23:54:40.801482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.455 [2024-12-09 23:54:40.841659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.714 Running I/O for 1 seconds... 00:12:57.655 2048.00 IOPS, 128.00 MiB/s 00:12:57.655 Latency(us) 00:12:57.655 [2024-12-09T22:54:42.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.655 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:57.655 Verification LBA range: start 0x0 length 0x400 00:12:57.655 Nvme0n1 : 1.02 2070.28 129.39 0.00 0.00 30444.98 4377.80 26214.40 00:12:57.655 [2024-12-09T22:54:42.128Z] =================================================================================================================== 00:12:57.655 [2024-12-09T22:54:42.128Z] Total : 2070.28 129.39 0.00 0.00 30444.98 4377.80 26214.40 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.914 rmmod nvme_tcp 00:12:57.914 rmmod nvme_fabrics 00:12:57.914 rmmod nvme_keyring 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 254576 ']' 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 254576 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 254576 ']' 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 254576 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 254576 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 254576' 00:12:57.914 killing process with pid 254576 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 254576 00:12:57.914 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 254576 00:12:58.174 [2024-12-09 23:54:42.529656] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:58.174 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:58.174 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:58.174 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:58.174 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:12:58.174 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:12:58.174 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:12:58.174 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:58.174 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:58.174 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:58.174 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.174 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.174 23:54:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:00.720 23:54:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:00.720 00:13:00.720 real 0m14.617s 00:13:00.720 user 0m23.565s 00:13:00.720 sys 0m6.938s 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:00.720 ************************************ 00:13:00.720 END TEST nvmf_host_management 00:13:00.720 ************************************ 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:00.720 ************************************ 00:13:00.720 START TEST nvmf_lvol 00:13:00.720 ************************************ 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:00.720 * Looking for test storage... 00:13:00.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:00.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.720 --rc genhtml_branch_coverage=1 00:13:00.720 --rc genhtml_function_coverage=1 00:13:00.720 --rc genhtml_legend=1 00:13:00.720 --rc geninfo_all_blocks=1 00:13:00.720 --rc geninfo_unexecuted_blocks=1 00:13:00.720 00:13:00.720 ' 00:13:00.720 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:00.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.720 --rc genhtml_branch_coverage=1 00:13:00.720 --rc genhtml_function_coverage=1 00:13:00.721 --rc genhtml_legend=1 00:13:00.721 --rc geninfo_all_blocks=1 00:13:00.721 --rc geninfo_unexecuted_blocks=1 00:13:00.721 00:13:00.721 ' 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:00.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.721 --rc genhtml_branch_coverage=1 00:13:00.721 --rc genhtml_function_coverage=1 00:13:00.721 --rc genhtml_legend=1 00:13:00.721 --rc geninfo_all_blocks=1 00:13:00.721 --rc geninfo_unexecuted_blocks=1 00:13:00.721 00:13:00.721 ' 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:00.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.721 --rc genhtml_branch_coverage=1 00:13:00.721 --rc genhtml_function_coverage=1 00:13:00.721 --rc genhtml_legend=1 00:13:00.721 --rc geninfo_all_blocks=1 00:13:00.721 --rc geninfo_unexecuted_blocks=1 00:13:00.721 00:13:00.721 ' 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:00.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:13:00.721 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:08.865 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:08.865 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:08.865 23:54:51 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:08.865 Found net devices under 0000:af:00.0: cvl_0_0 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:08.865 Found net devices under 0000:af:00.1: cvl_0_1 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.865 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.865 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.865 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.865 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:08.865 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.865 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.865 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.865 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:08.865 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:08.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:13:08.865 00:13:08.865 --- 10.0.0.2 ping statistics --- 00:13:08.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.865 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:13:08.865 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:08.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:13:08.865 00:13:08.865 --- 10.0.0.1 ping statistics --- 00:13:08.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.865 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:13:08.865 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.865 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:13:08.865 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:08.865 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.865 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=259239 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 259239 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 259239 ']' 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.866 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:08.866 [2024-12-09 23:54:52.362132] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:13:08.866 [2024-12-09 23:54:52.362187] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.866 [2024-12-09 23:54:52.457547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:08.866 [2024-12-09 23:54:52.495287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.866 [2024-12-09 23:54:52.495325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.866 [2024-12-09 23:54:52.495334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.866 [2024-12-09 23:54:52.495342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.866 [2024-12-09 23:54:52.495349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.866 [2024-12-09 23:54:52.496843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.866 [2024-12-09 23:54:52.496920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.866 [2024-12-09 23:54:52.496921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.866 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.866 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:13:08.866 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:08.866 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:08.866 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:08.866 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.866 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:09.126 [2024-12-09 23:54:53.412826] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.126 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:09.389 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:09.389 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:09.651 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:09.651 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:09.651 23:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:09.911 23:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=da79173f-097b-4818-9a84-824b16387f8b 00:13:09.911 23:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u da79173f-097b-4818-9a84-824b16387f8b lvol 20 00:13:10.173 23:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8e0385b8-8dd8-4d05-827e-e8e02dc0be32 00:13:10.173 23:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:10.435 23:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8e0385b8-8dd8-4d05-827e-e8e02dc0be32 00:13:10.435 23:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:10.695 [2024-12-09 23:54:55.073500] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.696 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:10.956 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=259788 00:13:10.956 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:10.956 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:11.898 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8e0385b8-8dd8-4d05-827e-e8e02dc0be32 MY_SNAPSHOT 00:13:12.159 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b2006b51-d94e-4bcf-a098-b83be75e26c7 00:13:12.159 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8e0385b8-8dd8-4d05-827e-e8e02dc0be32 30 00:13:12.419 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b2006b51-d94e-4bcf-a098-b83be75e26c7 MY_CLONE 00:13:12.679 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7c6e8915-21cf-49ff-bf20-2cde7a79d0ed 00:13:12.679 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7c6e8915-21cf-49ff-bf20-2cde7a79d0ed 00:13:13.251 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 259788 00:13:21.400 Initializing NVMe Controllers 00:13:21.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:21.400 Controller IO queue size 128, less than required. 00:13:21.400 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
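[editor's sketch] The rpc.py calls traced above by nvmf_lvol.sh condense to the sequence below. This is a reconstruction from the trace, not a new script: $rpc_py is the path the test itself sets, and the *_uuid variables stand in for the UUIDs rpc.py printed at runtime (visible in the trace above).

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # transport, then backing store: two 64 MiB malloc bdevs -> raid0 -> lvstore -> 20 GiB lvol
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512                 # -> Malloc0
    $rpc_py bdev_malloc_create 64 512                 # -> Malloc1
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs_uuid=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)
    lvol_uuid=$($rpc_py bdev_lvol_create -u "$lvs_uuid" lvol 20)

    # export the lvol over NVMe/TCP on the namespaced target address
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol_uuid"
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # while spdk_nvme_perf writes to the namespace, exercise snapshot/resize/clone/inflate
    snap_uuid=$($rpc_py bdev_lvol_snapshot "$lvol_uuid" MY_SNAPSHOT)
    $rpc_py bdev_lvol_resize "$lvol_uuid" 30
    clone_uuid=$($rpc_py bdev_lvol_clone "$snap_uuid" MY_CLONE)
    $rpc_py bdev_lvol_inflate "$clone_uuid"

The perf output that follows is from the concurrent spdk_nvme_perf run started above (-q 128 -o 4096 -w randwrite -t 10 -c 0x18 against 10.0.0.2:4420).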
00:13:21.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:21.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:21.400 Initialization complete. Launching workers. 00:13:21.400 ======================================================== 00:13:21.400 Latency(us) 00:13:21.400 Device Information : IOPS MiB/s Average min max 00:13:21.400 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12481.82 48.76 10257.28 1219.07 66493.06 00:13:21.400 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12373.43 48.33 10344.71 3535.72 47560.95 00:13:21.400 ======================================================== 00:13:21.400 Total : 24855.25 97.09 10300.80 1219.07 66493.06 00:13:21.400 00:13:21.400 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:21.400 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8e0385b8-8dd8-4d05-827e-e8e02dc0be32 00:13:21.660 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da79173f-097b-4818-9a84-824b16387f8b 00:13:21.920 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:21.920 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:21.921 rmmod nvme_tcp 00:13:21.921 rmmod nvme_fabrics 00:13:21.921 rmmod nvme_keyring 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 259239 ']' 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 259239 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 259239 ']' 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 259239 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.921 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 259239 00:13:22.181 23:55:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 259239' 00:13:22.181 killing process with pid 259239 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 259239 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 259239 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.181 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.748 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:24.748 00:13:24.748 real 0m23.997s 00:13:24.748 user 1m4.214s 00:13:24.748 sys 0m10.155s 00:13:24.748 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.748 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:24.748 ************************************ 00:13:24.748 END TEST nvmf_lvol 00:13:24.748 ************************************ 00:13:24.748 23:55:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:24.748 23:55:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:24.748 23:55:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.748 23:55:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:24.748 ************************************ 00:13:24.748 START TEST nvmf_lvs_grow 00:13:24.748 ************************************ 00:13:24.748 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:24.748 * Looking for test storage... 
00:13:24.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.748 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:24.748 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:13:24.748 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:24.748 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:24.748 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.748 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.748 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.748 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.748 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.748 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.748 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.748 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.748 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:24.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.749 --rc genhtml_branch_coverage=1 00:13:24.749 --rc genhtml_function_coverage=1 00:13:24.749 --rc genhtml_legend=1 00:13:24.749 --rc geninfo_all_blocks=1 00:13:24.749 --rc geninfo_unexecuted_blocks=1 00:13:24.749 00:13:24.749 ' 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:24.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.749 --rc genhtml_branch_coverage=1 00:13:24.749 --rc genhtml_function_coverage=1 00:13:24.749 --rc genhtml_legend=1 00:13:24.749 --rc geninfo_all_blocks=1 00:13:24.749 --rc geninfo_unexecuted_blocks=1 00:13:24.749 00:13:24.749 ' 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:24.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.749 --rc genhtml_branch_coverage=1 00:13:24.749 --rc genhtml_function_coverage=1 00:13:24.749 --rc genhtml_legend=1 00:13:24.749 --rc geninfo_all_blocks=1 00:13:24.749 --rc geninfo_unexecuted_blocks=1 00:13:24.749 00:13:24.749 ' 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:24.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.749 --rc genhtml_branch_coverage=1 00:13:24.749 --rc genhtml_function_coverage=1 00:13:24.749 --rc genhtml_legend=1 00:13:24.749 --rc geninfo_all_blocks=1 00:13:24.749 --rc geninfo_unexecuted_blocks=1 00:13:24.749 00:13:24.749 ' 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:24.749 23:55:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:24.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:13:24.749 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:32.891 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:32.892 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:32.892 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:32.892 23:55:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:32.892 Found net devices under 0000:af:00.0: cvl_0_0 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:32.892 Found net devices under 0000:af:00.1: cvl_0_1 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:32.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:13:32.892 00:13:32.892 --- 10.0.0.2 ping statistics --- 00:13:32.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.892 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:32.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:13:32.892 00:13:32.892 --- 10.0.0.1 ping statistics --- 00:13:32.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.892 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=265501 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 265501 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 265501 ']' 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.892 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:32.893 [2024-12-09 23:55:16.481316] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
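The nvmftestinit sequence above detects the two E810 ports (cvl_0_0 and cvl_0_1), moves the target port into a private network namespace, assigns the 10.0.0.x test addresses, opens TCP port 4420 in iptables, and verifies connectivity in both directions before nvmf_tgt is launched inside that namespace. Below is a minimal standalone sketch of the same network setup, assuming the interface and namespace names from this run (adjust for other hosts); it is an annotation, not part of the captured output.

  # flush stale addresses, then isolate the target port in its own netns
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side stays in the host namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  # target side lives inside the namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic and sanity-check the path
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # the target app then runs inside the namespace, for example:
  # ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1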
00:13:32.893 [2024-12-09 23:55:16.481368] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.893 [2024-12-09 23:55:16.575798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.893 [2024-12-09 23:55:16.612436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.893 [2024-12-09 23:55:16.612470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.893 [2024-12-09 23:55:16.612480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.893 [2024-12-09 23:55:16.612488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.893 [2024-12-09 23:55:16.612496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.893 [2024-12-09 23:55:16.613074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.893 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.893 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:13:32.893 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:32.893 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:32.893 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:33.154 [2024-12-09 23:55:17.535544] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:33.154 ************************************ 00:13:33.154 START TEST lvs_grow_clean 00:13:33.154 ************************************ 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:33.154 23:55:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:33.154 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:33.414 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:33.414 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:33.675 23:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=acbbc10e-c0de-44ee-8c94-f643eb926faa 00:13:33.675 23:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acbbc10e-c0de-44ee-8c94-f643eb926faa 00:13:33.675 23:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:33.935 23:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:33.935 23:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:33.935 23:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u acbbc10e-c0de-44ee-8c94-f643eb926faa lvol 150 00:13:33.935 23:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=21b76e8a-aa92-48bd-ae33-6cfe35e10e27 00:13:33.935 23:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:33.935 23:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:34.196 [2024-12-09 23:55:18.578731] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:34.196 [2024-12-09 23:55:18.578785] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:34.196 true 00:13:34.196 23:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:34.196 23:55:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acbbc10e-c0de-44ee-8c94-f643eb926faa 00:13:34.456 23:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:34.456 23:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:34.717 23:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 21b76e8a-aa92-48bd-ae33-6cfe35e10e27 00:13:34.717 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:34.978 [2024-12-09 23:55:19.325020] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.978 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:35.240 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:35.240 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=266077 00:13:35.240 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:35.240 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 266077 /var/tmp/bdevperf.sock 00:13:35.240 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 266077 ']' 00:13:35.240 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:35.240 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.240 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:35.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:35.240 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.240 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:35.240 [2024-12-09 23:55:19.559138] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
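Up to this point lvs_grow_clean has built its test stack entirely over RPC: a 200 MiB file exported as an AIO bdev, a logical volume store with 4 MiB clusters on top of it, a 150 MiB lvol, and an NVMe-oF subsystem that exposes that lvol over TCP for the bdevperf initiator started next. A condensed sketch of the same setup follows, assuming a running nvmf_tgt and a bdevperf instance listening on /var/tmp/bdevperf.sock; paths are abbreviated and the UUIDs shown in the log are replaced by shell variables.

  AIO_FILE=./aio_bdev_file    # placeholder for .../spdk/test/nvmf/target/aio_bdev
  rm -f "$AIO_FILE" && truncate -s 200M "$AIO_FILE"
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
  # lvstore creation prints its UUID; lvol creation prints the lvol bdev UUID
  lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # the bdevperf initiator then attaches the exported namespace as an NVMe bdev
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0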
00:13:35.240 [2024-12-09 23:55:19.559186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid266077 ] 00:13:35.240 [2024-12-09 23:55:19.647930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.240 [2024-12-09 23:55:19.687704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.185 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.185 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:13:36.185 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:36.446 Nvme0n1 00:13:36.446 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:36.707 [ 00:13:36.707 { 00:13:36.707 "name": "Nvme0n1", 00:13:36.707 "aliases": [ 00:13:36.707 "21b76e8a-aa92-48bd-ae33-6cfe35e10e27" 00:13:36.707 ], 00:13:36.707 "product_name": "NVMe disk", 00:13:36.707 "block_size": 4096, 00:13:36.707 "num_blocks": 38912, 00:13:36.707 "uuid": "21b76e8a-aa92-48bd-ae33-6cfe35e10e27", 00:13:36.707 "numa_id": 1, 00:13:36.707 "assigned_rate_limits": { 00:13:36.707 "rw_ios_per_sec": 0, 00:13:36.707 "rw_mbytes_per_sec": 0, 00:13:36.707 "r_mbytes_per_sec": 0, 00:13:36.708 "w_mbytes_per_sec": 0 00:13:36.708 }, 00:13:36.708 "claimed": false, 00:13:36.708 "zoned": false, 00:13:36.708 "supported_io_types": { 00:13:36.708 "read": true, 00:13:36.708 "write": true, 00:13:36.708 "unmap": true, 00:13:36.708 "flush": true, 00:13:36.708 "reset": true, 00:13:36.708 "nvme_admin": true, 00:13:36.708 "nvme_io": true, 00:13:36.708 "nvme_io_md": false, 00:13:36.708 "write_zeroes": true, 00:13:36.708 "zcopy": false, 00:13:36.708 "get_zone_info": false, 00:13:36.708 "zone_management": false, 00:13:36.708 "zone_append": false, 00:13:36.708 "compare": true, 00:13:36.708 "compare_and_write": true, 00:13:36.708 "abort": true, 00:13:36.708 "seek_hole": false, 00:13:36.708 "seek_data": false, 00:13:36.708 "copy": true, 00:13:36.708 "nvme_iov_md": false 00:13:36.708 }, 00:13:36.708 "memory_domains": [ 00:13:36.708 { 00:13:36.708 "dma_device_id": "system", 00:13:36.708 "dma_device_type": 1 00:13:36.708 } 00:13:36.708 ], 00:13:36.708 "driver_specific": { 00:13:36.708 "nvme": [ 00:13:36.708 { 00:13:36.708 "trid": { 00:13:36.708 "trtype": "TCP", 00:13:36.708 "adrfam": "IPv4", 00:13:36.708 "traddr": "10.0.0.2", 00:13:36.708 "trsvcid": "4420", 00:13:36.708 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:36.708 }, 00:13:36.708 "ctrlr_data": { 00:13:36.708 "cntlid": 1, 00:13:36.708 "vendor_id": "0x8086", 00:13:36.708 "model_number": "SPDK bdev Controller", 00:13:36.708 "serial_number": "SPDK0", 00:13:36.708 "firmware_revision": "25.01", 00:13:36.708 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:36.708 "oacs": { 00:13:36.708 "security": 0, 00:13:36.708 "format": 0, 00:13:36.708 "firmware": 0, 00:13:36.708 "ns_manage": 0 00:13:36.708 }, 00:13:36.708 "multi_ctrlr": true, 00:13:36.708 
"ana_reporting": false 00:13:36.708 }, 00:13:36.708 "vs": { 00:13:36.708 "nvme_version": "1.3" 00:13:36.708 }, 00:13:36.708 "ns_data": { 00:13:36.708 "id": 1, 00:13:36.708 "can_share": true 00:13:36.708 } 00:13:36.708 } 00:13:36.708 ], 00:13:36.708 "mp_policy": "active_passive" 00:13:36.708 } 00:13:36.708 } 00:13:36.708 ] 00:13:36.708 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:36.708 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=266342 00:13:36.708 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:36.708 Running I/O for 10 seconds... 00:13:37.651 Latency(us) 00:13:37.651 [2024-12-09T22:55:22.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.651 Nvme0n1 : 1.00 23899.00 93.36 0.00 0.00 0.00 0.00 0.00 00:13:37.651 [2024-12-09T22:55:22.124Z] =================================================================================================================== 00:13:37.651 [2024-12-09T22:55:22.124Z] Total : 23899.00 93.36 0.00 0.00 0.00 0.00 0.00 00:13:37.651 00:13:38.594 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u acbbc10e-c0de-44ee-8c94-f643eb926faa 00:13:38.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.855 Nvme0n1 : 2.00 24084.00 94.08 0.00 0.00 0.00 0.00 0.00 00:13:38.855 [2024-12-09T22:55:23.328Z] =================================================================================================================== 00:13:38.855 [2024-12-09T22:55:23.328Z] Total : 24084.00 94.08 0.00 0.00 0.00 0.00 0.00 00:13:38.855 00:13:38.855 true 00:13:38.855 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acbbc10e-c0de-44ee-8c94-f643eb926faa 00:13:38.855 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:39.116 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:39.116 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:39.116 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 266342 00:13:39.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.688 Nvme0n1 : 3.00 24078.33 94.06 0.00 0.00 0.00 0.00 0.00 00:13:39.688 [2024-12-09T22:55:24.161Z] =================================================================================================================== 00:13:39.688 [2024-12-09T22:55:24.161Z] Total : 24078.33 94.06 0.00 0.00 0.00 0.00 0.00 00:13:39.688 00:13:40.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:40.660 Nvme0n1 : 4.00 24129.50 94.26 0.00 0.00 0.00 0.00 0.00 00:13:40.660 [2024-12-09T22:55:25.133Z] 
=================================================================================================================== 00:13:40.660 [2024-12-09T22:55:25.133Z] Total : 24129.50 94.26 0.00 0.00 0.00 0.00 0.00 00:13:40.660 00:13:42.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.043 Nvme0n1 : 5.00 24173.20 94.43 0.00 0.00 0.00 0.00 0.00 00:13:42.043 [2024-12-09T22:55:26.516Z] =================================================================================================================== 00:13:42.043 [2024-12-09T22:55:26.516Z] Total : 24173.20 94.43 0.00 0.00 0.00 0.00 0.00 00:13:42.043 00:13:42.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.983 Nvme0n1 : 6.00 24222.00 94.62 0.00 0.00 0.00 0.00 0.00 00:13:42.983 [2024-12-09T22:55:27.456Z] =================================================================================================================== 00:13:42.983 [2024-12-09T22:55:27.456Z] Total : 24222.00 94.62 0.00 0.00 0.00 0.00 0.00 00:13:42.983 00:13:43.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:43.924 Nvme0n1 : 7.00 24256.57 94.75 0.00 0.00 0.00 0.00 0.00 00:13:43.924 [2024-12-09T22:55:28.397Z] =================================================================================================================== 00:13:43.924 [2024-12-09T22:55:28.397Z] Total : 24256.57 94.75 0.00 0.00 0.00 0.00 0.00 00:13:43.924 00:13:44.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:44.865 Nvme0n1 : 8.00 24283.62 94.86 0.00 0.00 0.00 0.00 0.00 00:13:44.865 [2024-12-09T22:55:29.338Z] =================================================================================================================== 00:13:44.865 [2024-12-09T22:55:29.338Z] Total : 24283.62 94.86 0.00 0.00 0.00 0.00 0.00 00:13:44.865 00:13:45.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:45.804 Nvme0n1 : 9.00 24297.00 94.91 0.00 0.00 0.00 0.00 0.00 00:13:45.804 [2024-12-09T22:55:30.277Z] =================================================================================================================== 00:13:45.804 [2024-12-09T22:55:30.277Z] Total : 24297.00 94.91 0.00 0.00 0.00 0.00 0.00 00:13:45.804 00:13:46.746 00:13:46.746 Latency(us) 00:13:46.746 [2024-12-09T22:55:31.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:46.746 Nvme0n1 : 10.00 24291.54 94.89 0.00 0.00 5266.46 1913.65 9699.33 00:13:46.746 [2024-12-09T22:55:31.219Z] =================================================================================================================== 00:13:46.746 [2024-12-09T22:55:31.219Z] Total : 24291.54 94.89 0.00 0.00 5266.46 1913.65 9699.33 00:13:46.746 { 00:13:46.746 "results": [ 00:13:46.746 { 00:13:46.746 "job": "Nvme0n1", 00:13:46.746 "core_mask": "0x2", 00:13:46.746 "workload": "randwrite", 00:13:46.746 "status": "finished", 00:13:46.746 "queue_depth": 128, 00:13:46.746 "io_size": 4096, 00:13:46.746 "runtime": 10.00122, 00:13:46.746 "iops": 24291.53643255523, 00:13:46.746 "mibps": 94.88881418966886, 00:13:46.746 "io_failed": 0, 00:13:46.746 "io_timeout": 0, 00:13:46.746 "avg_latency_us": 5266.457529926526, 00:13:46.746 "min_latency_us": 1913.6512, 00:13:46.746 "max_latency_us": 9699.328 00:13:46.746 } 00:13:46.746 ], 00:13:46.746 "core_count": 1 00:13:46.746 } 00:13:46.746 23:55:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 266077 00:13:46.746 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 266077 ']' 00:13:46.746 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 266077 00:13:46.746 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:13:46.746 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.746 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 266077 00:13:47.007 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:47.007 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:47.007 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 266077' 00:13:47.007 killing process with pid 266077 00:13:47.007 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 266077 00:13:47.007 Received shutdown signal, test time was about 10.000000 seconds 00:13:47.007 00:13:47.007 Latency(us) 00:13:47.007 [2024-12-09T22:55:31.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.007 [2024-12-09T22:55:31.480Z] =================================================================================================================== 00:13:47.007 [2024-12-09T22:55:31.480Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:47.007 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 266077 00:13:47.007 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:47.268 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:47.529 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acbbc10e-c0de-44ee-8c94-f643eb926faa 00:13:47.529 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:47.529 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:47.529 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:47.529 23:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:47.789 [2024-12-09 23:55:32.152615] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:47.789 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acbbc10e-c0de-44ee-8c94-f643eb926faa 00:13:47.789 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:13:47.789 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acbbc10e-c0de-44ee-8c94-f643eb926faa 00:13:47.789 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.789 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.789 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.790 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.790 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.790 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.790 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.790 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:47.790 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acbbc10e-c0de-44ee-8c94-f643eb926faa 00:13:48.050 request: 00:13:48.050 { 00:13:48.050 "uuid": "acbbc10e-c0de-44ee-8c94-f643eb926faa", 00:13:48.050 "method": "bdev_lvol_get_lvstores", 00:13:48.050 "req_id": 1 00:13:48.050 } 00:13:48.050 Got JSON-RPC error response 00:13:48.050 response: 00:13:48.050 { 00:13:48.050 "code": -19, 00:13:48.050 "message": "No such device" 00:13:48.050 } 00:13:48.050 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:13:48.050 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:48.050 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:48.050 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:48.050 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:48.311 aio_bdev 00:13:48.311 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 21b76e8a-aa92-48bd-ae33-6cfe35e10e27 00:13:48.311 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 
-- # local bdev_name=21b76e8a-aa92-48bd-ae33-6cfe35e10e27 00:13:48.311 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:48.311 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:13:48.311 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:48.311 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:48.311 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:48.311 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 21b76e8a-aa92-48bd-ae33-6cfe35e10e27 -t 2000 00:13:48.572 [ 00:13:48.572 { 00:13:48.572 "name": "21b76e8a-aa92-48bd-ae33-6cfe35e10e27", 00:13:48.572 "aliases": [ 00:13:48.572 "lvs/lvol" 00:13:48.572 ], 00:13:48.572 "product_name": "Logical Volume", 00:13:48.572 "block_size": 4096, 00:13:48.572 "num_blocks": 38912, 00:13:48.572 "uuid": "21b76e8a-aa92-48bd-ae33-6cfe35e10e27", 00:13:48.572 "assigned_rate_limits": { 00:13:48.572 "rw_ios_per_sec": 0, 00:13:48.572 "rw_mbytes_per_sec": 0, 00:13:48.572 "r_mbytes_per_sec": 0, 00:13:48.572 "w_mbytes_per_sec": 0 00:13:48.572 }, 00:13:48.572 "claimed": false, 00:13:48.572 "zoned": false, 00:13:48.572 "supported_io_types": { 00:13:48.572 "read": true, 00:13:48.572 "write": true, 00:13:48.572 "unmap": true, 00:13:48.572 "flush": false, 00:13:48.572 "reset": true, 00:13:48.572 "nvme_admin": false, 00:13:48.572 "nvme_io": false, 00:13:48.572 "nvme_io_md": false, 00:13:48.572 "write_zeroes": true, 00:13:48.572 "zcopy": false, 00:13:48.572 "get_zone_info": false, 00:13:48.572 "zone_management": false, 00:13:48.572 "zone_append": false, 00:13:48.572 "compare": false, 00:13:48.572 "compare_and_write": false, 00:13:48.572 "abort": false, 00:13:48.572 "seek_hole": true, 00:13:48.572 "seek_data": true, 00:13:48.572 "copy": false, 00:13:48.572 "nvme_iov_md": false 00:13:48.572 }, 00:13:48.572 "driver_specific": { 00:13:48.572 "lvol": { 00:13:48.572 "lvol_store_uuid": "acbbc10e-c0de-44ee-8c94-f643eb926faa", 00:13:48.572 "base_bdev": "aio_bdev", 00:13:48.572 "thin_provision": false, 00:13:48.572 "num_allocated_clusters": 38, 00:13:48.572 "snapshot": false, 00:13:48.572 "clone": false, 00:13:48.572 "esnap_clone": false 00:13:48.572 } 00:13:48.572 } 00:13:48.572 } 00:13:48.572 ] 00:13:48.572 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:13:48.572 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acbbc10e-c0de-44ee-8c94-f643eb926faa 00:13:48.572 23:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:48.832 23:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:48.832 23:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
acbbc10e-c0de-44ee-8c94-f643eb926faa 00:13:48.832 23:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:49.092 23:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:49.092 23:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 21b76e8a-aa92-48bd-ae33-6cfe35e10e27 00:13:49.092 23:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u acbbc10e-c0de-44ee-8c94-f643eb926faa 00:13:49.352 23:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:49.613 23:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:49.613 00:13:49.613 real 0m16.319s 00:13:49.613 user 0m15.609s 00:13:49.613 sys 0m1.969s 00:13:49.613 23:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.613 23:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:49.613 ************************************ 00:13:49.614 END TEST lvs_grow_clean 00:13:49.614 ************************************ 00:13:49.614 23:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:49.614 23:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:49.614 23:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.614 23:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:49.614 ************************************ 00:13:49.614 START TEST lvs_grow_dirty 00:13:49.614 ************************************ 00:13:49.614 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:13:49.614 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:49.614 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:49.614 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:49.614 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:49.614 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:49.614 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:49.614 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:49.614 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:49.614 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:49.874 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:49.874 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:50.135 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=724075fa-8db5-48aa-8ab4-5e04e52fb22e 00:13:50.135 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 724075fa-8db5-48aa-8ab4-5e04e52fb22e 00:13:50.135 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:50.396 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:50.396 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:50.396 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 724075fa-8db5-48aa-8ab4-5e04e52fb22e lvol 150 00:13:50.396 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5cabf13c-2a38-48e1-8a64-13a8eb163fed 00:13:50.396 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:50.396 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:50.657 [2024-12-09 23:55:34.968655] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:50.657 [2024-12-09 23:55:34.968710] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:50.657 true 00:13:50.657 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 724075fa-8db5-48aa-8ab4-5e04e52fb22e 00:13:50.657 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:50.917 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:50.917 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 
-a -s SPDK0 00:13:50.917 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5cabf13c-2a38-48e1-8a64-13a8eb163fed 00:13:51.178 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:51.438 [2024-12-09 23:55:35.686804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.438 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:51.438 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=268991 00:13:51.438 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:51.438 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:51.438 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 268991 /var/tmp/bdevperf.sock 00:13:51.438 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 268991 ']' 00:13:51.438 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:51.438 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.438 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:51.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:51.438 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.438 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:51.698 [2024-12-09 23:55:35.936092] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:13:51.698 [2024-12-09 23:55:35.936154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid268991 ] 00:13:51.698 [2024-12-09 23:55:36.026555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.698 [2024-12-09 23:55:36.066595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.637 23:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.637 23:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:13:52.637 23:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:52.637 Nvme0n1 00:13:52.637 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:52.898 [ 00:13:52.898 { 00:13:52.898 "name": "Nvme0n1", 00:13:52.898 "aliases": [ 00:13:52.898 "5cabf13c-2a38-48e1-8a64-13a8eb163fed" 00:13:52.898 ], 00:13:52.898 "product_name": "NVMe disk", 00:13:52.898 "block_size": 4096, 00:13:52.898 "num_blocks": 38912, 00:13:52.898 "uuid": "5cabf13c-2a38-48e1-8a64-13a8eb163fed", 00:13:52.898 "numa_id": 1, 00:13:52.898 "assigned_rate_limits": { 00:13:52.898 "rw_ios_per_sec": 0, 00:13:52.898 "rw_mbytes_per_sec": 0, 00:13:52.898 "r_mbytes_per_sec": 0, 00:13:52.898 "w_mbytes_per_sec": 0 00:13:52.898 }, 00:13:52.898 "claimed": false, 00:13:52.898 "zoned": false, 00:13:52.898 "supported_io_types": { 00:13:52.898 "read": true, 00:13:52.898 "write": true, 00:13:52.898 "unmap": true, 00:13:52.898 "flush": true, 00:13:52.898 "reset": true, 00:13:52.898 "nvme_admin": true, 00:13:52.898 "nvme_io": true, 00:13:52.898 "nvme_io_md": false, 00:13:52.898 "write_zeroes": true, 00:13:52.898 "zcopy": false, 00:13:52.898 "get_zone_info": false, 00:13:52.898 "zone_management": false, 00:13:52.898 "zone_append": false, 00:13:52.898 "compare": true, 00:13:52.898 "compare_and_write": true, 00:13:52.898 "abort": true, 00:13:52.898 "seek_hole": false, 00:13:52.898 "seek_data": false, 00:13:52.898 "copy": true, 00:13:52.898 "nvme_iov_md": false 00:13:52.898 }, 00:13:52.898 "memory_domains": [ 00:13:52.898 { 00:13:52.898 "dma_device_id": "system", 00:13:52.898 "dma_device_type": 1 00:13:52.898 } 00:13:52.898 ], 00:13:52.898 "driver_specific": { 00:13:52.898 "nvme": [ 00:13:52.898 { 00:13:52.898 "trid": { 00:13:52.898 "trtype": "TCP", 00:13:52.898 "adrfam": "IPv4", 00:13:52.898 "traddr": "10.0.0.2", 00:13:52.898 "trsvcid": "4420", 00:13:52.898 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:52.898 }, 00:13:52.898 "ctrlr_data": { 00:13:52.898 "cntlid": 1, 00:13:52.898 "vendor_id": "0x8086", 00:13:52.898 "model_number": "SPDK bdev Controller", 00:13:52.898 "serial_number": "SPDK0", 00:13:52.898 "firmware_revision": "25.01", 00:13:52.898 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:52.898 "oacs": { 00:13:52.898 "security": 0, 00:13:52.898 "format": 0, 00:13:52.898 "firmware": 0, 00:13:52.898 "ns_manage": 0 00:13:52.898 }, 00:13:52.898 "multi_ctrlr": true, 00:13:52.898 
"ana_reporting": false 00:13:52.898 }, 00:13:52.898 "vs": { 00:13:52.898 "nvme_version": "1.3" 00:13:52.898 }, 00:13:52.898 "ns_data": { 00:13:52.898 "id": 1, 00:13:52.898 "can_share": true 00:13:52.898 } 00:13:52.898 } 00:13:52.898 ], 00:13:52.898 "mp_policy": "active_passive" 00:13:52.898 } 00:13:52.898 } 00:13:52.898 ] 00:13:52.898 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:52.898 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=269175 00:13:52.898 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:52.898 Running I/O for 10 seconds... 00:13:54.282 Latency(us) 00:13:54.282 [2024-12-09T22:55:38.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:54.282 Nvme0n1 : 1.00 24041.00 93.91 0.00 0.00 0.00 0.00 0.00 00:13:54.282 [2024-12-09T22:55:38.755Z] =================================================================================================================== 00:13:54.282 [2024-12-09T22:55:38.755Z] Total : 24041.00 93.91 0.00 0.00 0.00 0.00 0.00 00:13:54.282 00:13:54.853 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 724075fa-8db5-48aa-8ab4-5e04e52fb22e 00:13:55.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.113 Nvme0n1 : 2.00 24211.00 94.57 0.00 0.00 0.00 0.00 0.00 00:13:55.113 [2024-12-09T22:55:39.586Z] =================================================================================================================== 00:13:55.113 [2024-12-09T22:55:39.586Z] Total : 24211.00 94.57 0.00 0.00 0.00 0.00 0.00 00:13:55.113 00:13:55.113 true 00:13:55.113 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 724075fa-8db5-48aa-8ab4-5e04e52fb22e 00:13:55.113 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:55.374 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:55.374 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:55.374 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 269175 00:13:55.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.945 Nvme0n1 : 3.00 24175.67 94.44 0.00 0.00 0.00 0.00 0.00 00:13:55.945 [2024-12-09T22:55:40.418Z] =================================================================================================================== 00:13:55.945 [2024-12-09T22:55:40.418Z] Total : 24175.67 94.44 0.00 0.00 0.00 0.00 0.00 00:13:55.945 00:13:56.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.896 Nvme0n1 : 4.00 24173.25 94.43 0.00 0.00 0.00 0.00 0.00 00:13:56.896 [2024-12-09T22:55:41.369Z] 
=================================================================================================================== 00:13:56.896 [2024-12-09T22:55:41.369Z] Total : 24173.25 94.43 0.00 0.00 0.00 0.00 0.00 00:13:56.896 00:13:58.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.281 Nvme0n1 : 5.00 24250.80 94.73 0.00 0.00 0.00 0.00 0.00 00:13:58.281 [2024-12-09T22:55:42.754Z] =================================================================================================================== 00:13:58.281 [2024-12-09T22:55:42.754Z] Total : 24250.80 94.73 0.00 0.00 0.00 0.00 0.00 00:13:58.281 00:13:59.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:59.222 Nvme0n1 : 6.00 24311.67 94.97 0.00 0.00 0.00 0.00 0.00 00:13:59.222 [2024-12-09T22:55:43.695Z] =================================================================================================================== 00:13:59.222 [2024-12-09T22:55:43.695Z] Total : 24311.67 94.97 0.00 0.00 0.00 0.00 0.00 00:13:59.222 00:14:00.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:00.162 Nvme0n1 : 7.00 24362.43 95.17 0.00 0.00 0.00 0.00 0.00 00:14:00.162 [2024-12-09T22:55:44.635Z] =================================================================================================================== 00:14:00.162 [2024-12-09T22:55:44.635Z] Total : 24362.43 95.17 0.00 0.00 0.00 0.00 0.00 00:14:00.162 00:14:01.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.107 Nvme0n1 : 8.00 24386.75 95.26 0.00 0.00 0.00 0.00 0.00 00:14:01.107 [2024-12-09T22:55:45.580Z] =================================================================================================================== 00:14:01.107 [2024-12-09T22:55:45.580Z] Total : 24386.75 95.26 0.00 0.00 0.00 0.00 0.00 00:14:01.107 00:14:02.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:02.051 Nvme0n1 : 9.00 24411.33 95.36 0.00 0.00 0.00 0.00 0.00 00:14:02.051 [2024-12-09T22:55:46.524Z] =================================================================================================================== 00:14:02.051 [2024-12-09T22:55:46.524Z] Total : 24411.33 95.36 0.00 0.00 0.00 0.00 0.00 00:14:02.051 00:14:02.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:02.991 Nvme0n1 : 10.00 24434.80 95.45 0.00 0.00 0.00 0.00 0.00 00:14:02.991 [2024-12-09T22:55:47.464Z] =================================================================================================================== 00:14:02.991 [2024-12-09T22:55:47.464Z] Total : 24434.80 95.45 0.00 0.00 0.00 0.00 0.00 00:14:02.991 00:14:02.991 00:14:02.991 Latency(us) 00:14:02.991 [2024-12-09T22:55:47.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:02.991 Nvme0n1 : 10.00 24433.97 95.45 0.00 0.00 5235.66 3093.30 11534.34 00:14:02.991 [2024-12-09T22:55:47.464Z] =================================================================================================================== 00:14:02.991 [2024-12-09T22:55:47.464Z] Total : 24433.97 95.45 0.00 0.00 5235.66 3093.30 11534.34 00:14:02.991 { 00:14:02.991 "results": [ 00:14:02.991 { 00:14:02.991 "job": "Nvme0n1", 00:14:02.991 "core_mask": "0x2", 00:14:02.991 "workload": "randwrite", 00:14:02.991 "status": "finished", 00:14:02.991 "queue_depth": 128, 00:14:02.991 "io_size": 4096, 00:14:02.991 
"runtime": 10.00296, 00:14:02.991 "iops": 24433.9675456065, 00:14:02.991 "mibps": 95.4451857250254, 00:14:02.991 "io_failed": 0, 00:14:02.991 "io_timeout": 0, 00:14:02.991 "avg_latency_us": 5235.658576973307, 00:14:02.991 "min_latency_us": 3093.2992, 00:14:02.991 "max_latency_us": 11534.336 00:14:02.991 } 00:14:02.991 ], 00:14:02.991 "core_count": 1 00:14:02.991 } 00:14:02.991 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 268991 00:14:02.991 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 268991 ']' 00:14:02.991 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 268991 00:14:02.991 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:14:02.991 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.991 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 268991 00:14:02.991 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:02.991 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:02.991 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 268991' 00:14:02.991 killing process with pid 268991 00:14:02.991 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 268991 00:14:02.991 Received shutdown signal, test time was about 10.000000 seconds 00:14:02.991 00:14:02.991 Latency(us) 00:14:02.991 [2024-12-09T22:55:47.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.991 [2024-12-09T22:55:47.464Z] =================================================================================================================== 00:14:02.991 [2024-12-09T22:55:47.464Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:02.991 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 268991 00:14:03.251 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:03.512 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:03.773 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 724075fa-8db5-48aa-8ab4-5e04e52fb22e 00:14:03.773 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:03.773 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:03.773 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:03.773 23:55:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 265501 00:14:03.773 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 265501 00:14:04.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 265501 Killed "${NVMF_APP[@]}" "$@" 00:14:04.034 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:04.034 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:04.034 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:04.034 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.034 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:04.034 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=271102 00:14:04.034 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 271102 00:14:04.034 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:04.034 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 271102 ']' 00:14:04.034 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.034 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.034 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.034 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.034 23:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:04.034 [2024-12-09 23:55:48.325354] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:14:04.034 [2024-12-09 23:55:48.325403] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.034 [2024-12-09 23:55:48.418363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.034 [2024-12-09 23:55:48.456296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.034 [2024-12-09 23:55:48.456332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.034 [2024-12-09 23:55:48.456342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.034 [2024-12-09 23:55:48.456350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:04.034 [2024-12-09 23:55:48.456357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.034 [2024-12-09 23:55:48.456924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:04.974 [2024-12-09 23:55:49.369755] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:04.974 [2024-12-09 23:55:49.369844] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:04.974 [2024-12-09 23:55:49.369870] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5cabf13c-2a38-48e1-8a64-13a8eb163fed 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5cabf13c-2a38-48e1-8a64-13a8eb163fed 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:04.974 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:05.234 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5cabf13c-2a38-48e1-8a64-13a8eb163fed -t 2000 00:14:05.495 [ 00:14:05.495 { 00:14:05.495 "name": "5cabf13c-2a38-48e1-8a64-13a8eb163fed", 00:14:05.495 "aliases": [ 00:14:05.495 "lvs/lvol" 00:14:05.495 ], 00:14:05.495 "product_name": "Logical Volume", 00:14:05.495 "block_size": 4096, 00:14:05.495 "num_blocks": 38912, 00:14:05.495 "uuid": "5cabf13c-2a38-48e1-8a64-13a8eb163fed", 00:14:05.495 "assigned_rate_limits": { 00:14:05.495 "rw_ios_per_sec": 0, 00:14:05.495 "rw_mbytes_per_sec": 0, 
00:14:05.495 "r_mbytes_per_sec": 0, 00:14:05.495 "w_mbytes_per_sec": 0 00:14:05.495 }, 00:14:05.495 "claimed": false, 00:14:05.495 "zoned": false, 00:14:05.495 "supported_io_types": { 00:14:05.495 "read": true, 00:14:05.495 "write": true, 00:14:05.496 "unmap": true, 00:14:05.496 "flush": false, 00:14:05.496 "reset": true, 00:14:05.496 "nvme_admin": false, 00:14:05.496 "nvme_io": false, 00:14:05.496 "nvme_io_md": false, 00:14:05.496 "write_zeroes": true, 00:14:05.496 "zcopy": false, 00:14:05.496 "get_zone_info": false, 00:14:05.496 "zone_management": false, 00:14:05.496 "zone_append": false, 00:14:05.496 "compare": false, 00:14:05.496 "compare_and_write": false, 00:14:05.496 "abort": false, 00:14:05.496 "seek_hole": true, 00:14:05.496 "seek_data": true, 00:14:05.496 "copy": false, 00:14:05.496 "nvme_iov_md": false 00:14:05.496 }, 00:14:05.496 "driver_specific": { 00:14:05.496 "lvol": { 00:14:05.496 "lvol_store_uuid": "724075fa-8db5-48aa-8ab4-5e04e52fb22e", 00:14:05.496 "base_bdev": "aio_bdev", 00:14:05.496 "thin_provision": false, 00:14:05.496 "num_allocated_clusters": 38, 00:14:05.496 "snapshot": false, 00:14:05.496 "clone": false, 00:14:05.496 "esnap_clone": false 00:14:05.496 } 00:14:05.496 } 00:14:05.496 } 00:14:05.496 ] 00:14:05.496 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:14:05.496 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 724075fa-8db5-48aa-8ab4-5e04e52fb22e 00:14:05.496 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:05.496 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:05.496 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 724075fa-8db5-48aa-8ab4-5e04e52fb22e 00:14:05.496 23:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:05.756 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:05.756 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:06.016 [2024-12-09 23:55:50.310641] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:06.016 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 724075fa-8db5-48aa-8ab4-5e04e52fb22e 00:14:06.016 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:14:06.016 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 724075fa-8db5-48aa-8ab4-5e04e52fb22e 00:14:06.016 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.016 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.016 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.016 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.016 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.016 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.016 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.016 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:06.016 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 724075fa-8db5-48aa-8ab4-5e04e52fb22e 00:14:06.277 request: 00:14:06.277 { 00:14:06.277 "uuid": "724075fa-8db5-48aa-8ab4-5e04e52fb22e", 00:14:06.277 "method": "bdev_lvol_get_lvstores", 00:14:06.277 "req_id": 1 00:14:06.277 } 00:14:06.277 Got JSON-RPC error response 00:14:06.277 response: 00:14:06.277 { 00:14:06.277 "code": -19, 00:14:06.277 "message": "No such device" 00:14:06.277 } 00:14:06.277 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:14:06.277 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:06.277 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:06.277 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:06.277 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:06.277 aio_bdev 00:14:06.277 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5cabf13c-2a38-48e1-8a64-13a8eb163fed 00:14:06.277 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5cabf13c-2a38-48e1-8a64-13a8eb163fed 00:14:06.277 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:06.277 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:14:06.277 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:06.277 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:06.277 23:55:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:06.538 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5cabf13c-2a38-48e1-8a64-13a8eb163fed -t 2000 00:14:06.798 [ 00:14:06.798 { 00:14:06.798 "name": "5cabf13c-2a38-48e1-8a64-13a8eb163fed", 00:14:06.798 "aliases": [ 00:14:06.798 "lvs/lvol" 00:14:06.798 ], 00:14:06.798 "product_name": "Logical Volume", 00:14:06.798 "block_size": 4096, 00:14:06.798 "num_blocks": 38912, 00:14:06.798 "uuid": "5cabf13c-2a38-48e1-8a64-13a8eb163fed", 00:14:06.798 "assigned_rate_limits": { 00:14:06.798 "rw_ios_per_sec": 0, 00:14:06.798 "rw_mbytes_per_sec": 0, 00:14:06.798 "r_mbytes_per_sec": 0, 00:14:06.798 "w_mbytes_per_sec": 0 00:14:06.798 }, 00:14:06.798 "claimed": false, 00:14:06.798 "zoned": false, 00:14:06.798 "supported_io_types": { 00:14:06.798 "read": true, 00:14:06.798 "write": true, 00:14:06.798 "unmap": true, 00:14:06.798 "flush": false, 00:14:06.798 "reset": true, 00:14:06.798 "nvme_admin": false, 00:14:06.798 "nvme_io": false, 00:14:06.798 "nvme_io_md": false, 00:14:06.798 "write_zeroes": true, 00:14:06.798 "zcopy": false, 00:14:06.798 "get_zone_info": false, 00:14:06.798 "zone_management": false, 00:14:06.798 "zone_append": false, 00:14:06.798 "compare": false, 00:14:06.798 "compare_and_write": false, 00:14:06.798 "abort": false, 00:14:06.798 "seek_hole": true, 00:14:06.798 "seek_data": true, 00:14:06.798 "copy": false, 00:14:06.798 "nvme_iov_md": false 00:14:06.798 }, 00:14:06.798 "driver_specific": { 00:14:06.798 "lvol": { 00:14:06.798 "lvol_store_uuid": "724075fa-8db5-48aa-8ab4-5e04e52fb22e", 00:14:06.798 "base_bdev": "aio_bdev", 00:14:06.798 "thin_provision": false, 00:14:06.798 "num_allocated_clusters": 38, 00:14:06.798 "snapshot": false, 00:14:06.798 "clone": false, 00:14:06.798 "esnap_clone": false 00:14:06.798 } 00:14:06.798 } 00:14:06.798 } 00:14:06.798 ] 00:14:06.798 23:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:14:06.798 23:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 724075fa-8db5-48aa-8ab4-5e04e52fb22e 00:14:06.798 23:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:06.798 23:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:06.798 23:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 724075fa-8db5-48aa-8ab4-5e04e52fb22e 00:14:06.798 23:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:07.059 23:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:07.059 23:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5cabf13c-2a38-48e1-8a64-13a8eb163fed 00:14:07.320 23:55:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 724075fa-8db5-48aa-8ab4-5e04e52fb22e 00:14:07.581 23:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:07.843 00:14:07.843 real 0m18.097s 00:14:07.843 user 0m45.677s 00:14:07.843 sys 0m4.654s 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:07.843 ************************************ 00:14:07.843 END TEST lvs_grow_dirty 00:14:07.843 ************************************ 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:07.843 nvmf_trace.0 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:07.843 rmmod nvme_tcp 00:14:07.843 rmmod nvme_fabrics 00:14:07.843 rmmod nvme_keyring 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:14:07.843 
23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 271102 ']' 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 271102 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 271102 ']' 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 271102 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.843 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 271102 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 271102' 00:14:08.104 killing process with pid 271102 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 271102 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 271102 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.104 23:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:10.655 00:14:10.655 real 0m45.764s 00:14:10.655 user 1m8.037s 00:14:10.655 sys 0m12.784s 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:10.655 ************************************ 00:14:10.655 END TEST nvmf_lvs_grow 00:14:10.655 ************************************ 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:10.655 ************************************ 00:14:10.655 START TEST nvmf_bdev_io_wait 00:14:10.655 ************************************ 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:10.655 * Looking for test storage... 00:14:10.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:10.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.655 --rc genhtml_branch_coverage=1 00:14:10.655 --rc genhtml_function_coverage=1 00:14:10.655 --rc genhtml_legend=1 00:14:10.655 --rc geninfo_all_blocks=1 00:14:10.655 --rc geninfo_unexecuted_blocks=1 00:14:10.655 00:14:10.655 ' 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:10.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.655 --rc genhtml_branch_coverage=1 00:14:10.655 --rc genhtml_function_coverage=1 00:14:10.655 --rc genhtml_legend=1 00:14:10.655 --rc geninfo_all_blocks=1 00:14:10.655 --rc geninfo_unexecuted_blocks=1 00:14:10.655 00:14:10.655 ' 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:10.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.655 --rc genhtml_branch_coverage=1 00:14:10.655 --rc genhtml_function_coverage=1 00:14:10.655 --rc genhtml_legend=1 00:14:10.655 --rc geninfo_all_blocks=1 00:14:10.655 --rc geninfo_unexecuted_blocks=1 00:14:10.655 00:14:10.655 ' 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:10.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.655 --rc genhtml_branch_coverage=1 00:14:10.655 --rc genhtml_function_coverage=1 00:14:10.655 --rc genhtml_legend=1 00:14:10.655 --rc geninfo_all_blocks=1 00:14:10.655 --rc geninfo_unexecuted_blocks=1 00:14:10.655 00:14:10.655 ' 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.655 23:55:54 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.655 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:10.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:14:10.656 23:55:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:18.789 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:18.789 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.789 23:56:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:18.789 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:18.790 Found net devices under 0000:af:00.0: cvl_0_0 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:18.790 Found net devices under 0000:af:00.1: cvl_0_1 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.790 23:56:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:18.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:14:18.790 00:14:18.790 --- 10.0.0.2 ping statistics --- 00:14:18.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.790 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:14:18.790 00:14:18.790 --- 10.0.0.1 ping statistics --- 00:14:18.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.790 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=275612 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 275612 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 275612 ']' 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.790 23:56:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:18.790 [2024-12-09 23:56:02.323407] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
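The nvmf_tcp_init sequence traced above builds the test topology out of the two cvl ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened, and both directions are ping-checked before the target app is started inside the namespace. A minimal standalone sketch of the same setup, assuming the interface names and addresses used in this run:

    #!/usr/bin/env bash
    # Sketch of the netns-based NVMe/TCP test topology (assumes cvl_0_0/cvl_0_1 exist).
    set -e
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # allow NVMe/TCP (port 4420) in from the initiator interface, tagged for later cleanup
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                           # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target ns -> initiator

The target is then launched with ip netns exec "$NS" nvmf_tgt ... so that only the namespaced interface carries NVMe/TCP traffic, while the RPC unix socket remains reachable from the root namespace.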
00:14:18.790 [2024-12-09 23:56:02.323464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.790 [2024-12-09 23:56:02.420231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:18.790 [2024-12-09 23:56:02.462203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.790 [2024-12-09 23:56:02.462243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.790 [2024-12-09 23:56:02.462253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.790 [2024-12-09 23:56:02.462262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.790 [2024-12-09 23:56:02.462269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.790 [2024-12-09 23:56:02.464039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.790 [2024-12-09 23:56:02.464153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.790 [2024-12-09 23:56:02.465057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.790 [2024-12-09 23:56:02.465057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:18.790 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.790 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:14:18.790 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:18.790 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:18.790 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:18.790 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.790 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:18.790 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.790 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:18.790 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.790 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:18.790 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.790 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:14:19.052 [2024-12-09 23:56:03.283094] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.052 Malloc0 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:19.052 [2024-12-09 23:56:03.333489] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=275765 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=275767 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:19.052 { 00:14:19.052 "params": { 
00:14:19.052 "name": "Nvme$subsystem", 00:14:19.052 "trtype": "$TEST_TRANSPORT", 00:14:19.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:19.052 "adrfam": "ipv4", 00:14:19.052 "trsvcid": "$NVMF_PORT", 00:14:19.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:19.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:19.052 "hdgst": ${hdgst:-false}, 00:14:19.052 "ddgst": ${ddgst:-false} 00:14:19.052 }, 00:14:19.052 "method": "bdev_nvme_attach_controller" 00:14:19.052 } 00:14:19.052 EOF 00:14:19.052 )") 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=275769 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:19.052 { 00:14:19.052 "params": { 00:14:19.052 "name": "Nvme$subsystem", 00:14:19.052 "trtype": "$TEST_TRANSPORT", 00:14:19.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:19.052 "adrfam": "ipv4", 00:14:19.052 "trsvcid": "$NVMF_PORT", 00:14:19.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:19.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:19.052 "hdgst": ${hdgst:-false}, 00:14:19.052 "ddgst": ${ddgst:-false} 00:14:19.052 }, 00:14:19.052 "method": "bdev_nvme_attach_controller" 00:14:19.052 } 00:14:19.052 EOF 00:14:19.052 )") 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=275772 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:19.052 { 00:14:19.052 "params": { 00:14:19.052 "name": "Nvme$subsystem", 00:14:19.052 "trtype": "$TEST_TRANSPORT", 00:14:19.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:19.052 "adrfam": "ipv4", 00:14:19.052 "trsvcid": "$NVMF_PORT", 00:14:19.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:19.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:19.052 "hdgst": ${hdgst:-false}, 
00:14:19.052 "ddgst": ${ddgst:-false} 00:14:19.052 }, 00:14:19.052 "method": "bdev_nvme_attach_controller" 00:14:19.052 } 00:14:19.052 EOF 00:14:19.052 )") 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:19.052 { 00:14:19.052 "params": { 00:14:19.052 "name": "Nvme$subsystem", 00:14:19.052 "trtype": "$TEST_TRANSPORT", 00:14:19.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:19.052 "adrfam": "ipv4", 00:14:19.052 "trsvcid": "$NVMF_PORT", 00:14:19.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:19.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:19.052 "hdgst": ${hdgst:-false}, 00:14:19.052 "ddgst": ${ddgst:-false} 00:14:19.052 }, 00:14:19.052 "method": "bdev_nvme_attach_controller" 00:14:19.052 } 00:14:19.052 EOF 00:14:19.052 )") 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 275765 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:19.052 "params": { 00:14:19.052 "name": "Nvme1", 00:14:19.052 "trtype": "tcp", 00:14:19.052 "traddr": "10.0.0.2", 00:14:19.052 "adrfam": "ipv4", 00:14:19.052 "trsvcid": "4420", 00:14:19.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:19.052 "hdgst": false, 00:14:19.052 "ddgst": false 00:14:19.052 }, 00:14:19.052 "method": "bdev_nvme_attach_controller" 00:14:19.052 }' 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:19.052 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:19.052 "params": { 00:14:19.052 "name": "Nvme1", 00:14:19.052 "trtype": "tcp", 00:14:19.052 "traddr": "10.0.0.2", 00:14:19.052 "adrfam": "ipv4", 00:14:19.052 "trsvcid": "4420", 00:14:19.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:19.052 "hdgst": false, 00:14:19.053 "ddgst": false 00:14:19.053 }, 00:14:19.053 "method": "bdev_nvme_attach_controller" 00:14:19.053 }' 00:14:19.053 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:19.053 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:19.053 "params": { 00:14:19.053 "name": "Nvme1", 00:14:19.053 "trtype": "tcp", 00:14:19.053 "traddr": "10.0.0.2", 00:14:19.053 "adrfam": "ipv4", 00:14:19.053 "trsvcid": "4420", 00:14:19.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:19.053 "hdgst": false, 00:14:19.053 "ddgst": false 00:14:19.053 }, 00:14:19.053 "method": "bdev_nvme_attach_controller" 00:14:19.053 }' 00:14:19.053 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:19.053 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:19.053 "params": { 00:14:19.053 "name": "Nvme1", 00:14:19.053 "trtype": "tcp", 00:14:19.053 "traddr": "10.0.0.2", 00:14:19.053 "adrfam": "ipv4", 00:14:19.053 "trsvcid": "4420", 00:14:19.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:19.053 "hdgst": false, 00:14:19.053 "ddgst": false 00:14:19.053 }, 00:14:19.053 "method": "bdev_nvme_attach_controller" 00:14:19.053 }' 00:14:19.053 [2024-12-09 23:56:03.387916] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:14:19.053 [2024-12-09 23:56:03.387969] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:19.053 [2024-12-09 23:56:03.390192] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:14:19.053 [2024-12-09 23:56:03.390235] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:19.053 [2024-12-09 23:56:03.390544] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:14:19.053 [2024-12-09 23:56:03.390585] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:19.053 [2024-12-09 23:56:03.390717] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
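Each bdevperf instance reads its bdev configuration from /dev/fd/63, i.e. from the JSON that gen_nvmf_target_json renders above; the four instances differ only in core mask, instance id (-i) and workload (-w write/read/flush/unmap). Below is a sketch of the write instance with the config written to a file instead of a process substitution. Only the bdev_nvme_attach_controller entry is echoed in the trace, so the outer "subsystems"/"bdev" wrapper here is an assumption based on SPDK's usual JSON config layout:

    # One of the four bdevperf runs above (write workload), with the generated
    # JSON spelled out instead of passed via gen_nvmf_target_json and /dev/fd/63.
    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # path assumed relative to the SPDK build directory
    ./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json \
        -q 128 -o 4096 -w write -t 1 -s 256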
00:14:19.053 [2024-12-09 23:56:03.390758] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:19.313 [2024-12-09 23:56:03.571379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.313 [2024-12-09 23:56:03.612737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:19.313 [2024-12-09 23:56:03.666827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.313 [2024-12-09 23:56:03.708745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:19.313 [2024-12-09 23:56:03.761043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.573 [2024-12-09 23:56:03.802840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:19.573 [2024-12-09 23:56:03.862222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.573 [2024-12-09 23:56:03.909215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:14:19.573 Running I/O for 1 seconds... 00:14:19.573 Running I/O for 1 seconds... 00:14:19.573 Running I/O for 1 seconds... 00:14:19.834 Running I/O for 1 seconds... 00:14:20.776 6278.00 IOPS, 24.52 MiB/s [2024-12-09T22:56:05.249Z] 247296.00 IOPS, 966.00 MiB/s [2024-12-09T22:56:05.249Z] 13882.00 IOPS, 54.23 MiB/s 00:14:20.776 Latency(us) 00:14:20.776 [2024-12-09T22:56:05.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.776 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:20.776 Nvme1n1 : 1.00 246932.61 964.58 0.00 0.00 515.95 217.91 1461.45 00:14:20.776 [2024-12-09T22:56:05.249Z] =================================================================================================================== 00:14:20.776 [2024-12-09T22:56:05.249Z] Total : 246932.61 964.58 0.00 0.00 515.95 217.91 1461.45 00:14:20.776 00:14:20.776 Latency(us) 00:14:20.776 [2024-12-09T22:56:05.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.776 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:20.776 Nvme1n1 : 1.01 13943.98 54.47 0.00 0.00 9153.39 4115.66 18454.94 00:14:20.776 [2024-12-09T22:56:05.249Z] =================================================================================================================== 00:14:20.776 [2024-12-09T22:56:05.249Z] Total : 13943.98 54.47 0.00 0.00 9153.39 4115.66 18454.94 00:14:20.776 00:14:20.776 Latency(us) 00:14:20.776 [2024-12-09T22:56:05.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.776 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:20.776 Nvme1n1 : 1.02 6311.82 24.66 0.00 0.00 20081.24 10538.19 31876.71 00:14:20.776 [2024-12-09T22:56:05.249Z] =================================================================================================================== 00:14:20.776 [2024-12-09T22:56:05.249Z] Total : 6311.82 24.66 0.00 0.00 20081.24 10538.19 31876.71 00:14:20.776 6825.00 IOPS, 26.66 MiB/s 00:14:20.776 Latency(us) 00:14:20.776 [2024-12-09T22:56:05.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.776 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:20.776 Nvme1n1 : 1.01 6927.70 27.06 0.00 0.00 18429.50 3486.52 40684.75 00:14:20.776 [2024-12-09T22:56:05.249Z] 
=================================================================================================================== 00:14:20.776 [2024-12-09T22:56:05.249Z] Total : 6927.70 27.06 0.00 0.00 18429.50 3486.52 40684.75 00:14:20.776 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 275767 00:14:20.776 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 275769 00:14:20.776 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 275772 00:14:20.776 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.776 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.776 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:21.036 rmmod nvme_tcp 00:14:21.036 rmmod nvme_fabrics 00:14:21.036 rmmod nvme_keyring 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 275612 ']' 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 275612 00:14:21.036 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 275612 ']' 00:14:21.037 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 275612 00:14:21.037 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:14:21.037 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.037 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275612 00:14:21.037 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.037 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.037 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 275612' 00:14:21.037 killing process with pid 275612 00:14:21.037 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 275612 00:14:21.037 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 275612 00:14:21.297 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:21.297 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:21.297 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:21.297 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:14:21.298 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:14:21.298 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:14:21.298 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:21.298 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:21.298 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:21.298 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.298 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.298 23:56:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.303 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:23.303 00:14:23.303 real 0m12.944s 00:14:23.303 user 0m19.455s 00:14:23.303 sys 0m7.492s 00:14:23.303 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.303 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:23.303 ************************************ 00:14:23.303 END TEST nvmf_bdev_io_wait 00:14:23.303 ************************************ 00:14:23.303 23:56:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:23.303 23:56:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:23.303 23:56:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.303 23:56:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:23.303 ************************************ 00:14:23.303 START TEST nvmf_queue_depth 00:14:23.303 ************************************ 00:14:23.303 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:23.564 * Looking for test storage... 
00:14:23.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.564 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:23.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.565 --rc genhtml_branch_coverage=1 00:14:23.565 --rc genhtml_function_coverage=1 00:14:23.565 --rc genhtml_legend=1 00:14:23.565 --rc geninfo_all_blocks=1 00:14:23.565 --rc geninfo_unexecuted_blocks=1 00:14:23.565 00:14:23.565 ' 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:23.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.565 --rc genhtml_branch_coverage=1 00:14:23.565 --rc genhtml_function_coverage=1 00:14:23.565 --rc genhtml_legend=1 00:14:23.565 --rc geninfo_all_blocks=1 00:14:23.565 --rc geninfo_unexecuted_blocks=1 00:14:23.565 00:14:23.565 ' 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:23.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.565 --rc genhtml_branch_coverage=1 00:14:23.565 --rc genhtml_function_coverage=1 00:14:23.565 --rc genhtml_legend=1 00:14:23.565 --rc geninfo_all_blocks=1 00:14:23.565 --rc geninfo_unexecuted_blocks=1 00:14:23.565 00:14:23.565 ' 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:23.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.565 --rc genhtml_branch_coverage=1 00:14:23.565 --rc genhtml_function_coverage=1 00:14:23.565 --rc genhtml_legend=1 00:14:23.565 --rc geninfo_all_blocks=1 00:14:23.565 --rc geninfo_unexecuted_blocks=1 00:14:23.565 00:14:23.565 ' 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:23.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:14:23.565 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:31.706 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.706 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:14:31.706 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:31.706 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:31.706 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:31.707 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:31.707 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:31.707 Found net devices under 0000:af:00.0: cvl_0_0 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:31.707 Found net devices under 0000:af:00.1: cvl_0_1 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.707 23:56:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:31.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:14:31.707 00:14:31.707 --- 10.0.0.2 ping statistics --- 00:14:31.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.707 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:14:31.707 00:14:31.707 --- 10.0.0.1 ping statistics --- 00:14:31.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.707 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.707 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:31.708 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:31.708 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:31.708 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:31.708 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:31.708 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:31.708 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=280010 00:14:31.708 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:31.708 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 280010 00:14:31.708 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 280010 ']' 00:14:31.708 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.708 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.708 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.708 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.708 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:31.708 [2024-12-09 23:56:15.337929] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
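For reference, the namespace-based loopback topology that nvmf/common.sh builds in the trace above can be reproduced by hand with roughly the sketch below. The interface names cvl_0_0/cvl_0_1, the namespace cvl_0_0_ns_spdk and the 10.0.0.0/24 addresses are the ones used in this run; other NICs or setups will differ.

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in on the initiator-side port and verify reachability
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1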
00:14:31.708 [2024-12-09 23:56:15.337975] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.708 [2024-12-09 23:56:15.433620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.708 [2024-12-09 23:56:15.473835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.708 [2024-12-09 23:56:15.473873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.708 [2024-12-09 23:56:15.473883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.708 [2024-12-09 23:56:15.473891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.708 [2024-12-09 23:56:15.473898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.708 [2024-12-09 23:56:15.474486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.708 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.708 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:14:31.708 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:31.708 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:31.708 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:31.968 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.968 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:31.968 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.968 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:31.969 [2024-12-09 23:56:16.220074] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:31.969 Malloc0 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.969 23:56:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:31.969 [2024-12-09 23:56:16.270593] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=280127 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 280127 /var/tmp/bdevperf.sock 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 280127 ']' 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.969 23:56:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:31.969 [2024-12-09 23:56:16.324860] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
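The rpc_cmd calls traced above are thin wrappers around scripts/rpc.py; a minimal sketch of the same queue-depth target configuration, assuming the commands are run from the SPDK checkout, looks like this (the cnode1/Malloc0 names, serial number, address and port are the ones used in this run):

    # start the target inside the test namespace on core mask 0x2
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # create the TCP transport, a 64 MiB malloc bdev (512-byte blocks),
    # and expose it through subsystem cnode1 on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420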
00:14:31.969 [2024-12-09 23:56:16.324907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280127 ] 00:14:31.969 [2024-12-09 23:56:16.415101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.229 [2024-12-09 23:56:16.456767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.800 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.800 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:14:32.800 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:32.800 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.800 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:32.800 NVMe0n1 00:14:32.800 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.800 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:33.060 Running I/O for 10 seconds... 00:14:34.945 12288.00 IOPS, 48.00 MiB/s [2024-12-09T22:56:20.802Z] 12639.00 IOPS, 49.37 MiB/s [2024-12-09T22:56:21.373Z] 12604.00 IOPS, 49.23 MiB/s [2024-12-09T22:56:22.757Z] 12640.50 IOPS, 49.38 MiB/s [2024-12-09T22:56:23.700Z] 12677.80 IOPS, 49.52 MiB/s [2024-12-09T22:56:24.643Z] 12765.00 IOPS, 49.86 MiB/s [2024-12-09T22:56:25.585Z] 12813.29 IOPS, 50.05 MiB/s [2024-12-09T22:56:26.527Z] 12829.00 IOPS, 50.11 MiB/s [2024-12-09T22:56:27.470Z] 12842.44 IOPS, 50.17 MiB/s [2024-12-09T22:56:27.470Z] 12871.60 IOPS, 50.28 MiB/s 00:14:42.997 Latency(us) 00:14:42.997 [2024-12-09T22:56:27.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.997 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:42.997 Verification LBA range: start 0x0 length 0x4000 00:14:42.997 NVMe0n1 : 10.06 12893.41 50.36 0.00 0.00 79166.20 18140.36 54525.95 00:14:42.997 [2024-12-09T22:56:27.470Z] =================================================================================================================== 00:14:42.997 [2024-12-09T22:56:27.470Z] Total : 12893.41 50.36 0.00 0.00 79166.20 18140.36 54525.95 00:14:42.997 { 00:14:42.997 "results": [ 00:14:42.997 { 00:14:42.997 "job": "NVMe0n1", 00:14:42.997 "core_mask": "0x1", 00:14:42.997 "workload": "verify", 00:14:42.997 "status": "finished", 00:14:42.997 "verify_range": { 00:14:42.997 "start": 0, 00:14:42.997 "length": 16384 00:14:42.997 }, 00:14:42.997 "queue_depth": 1024, 00:14:42.997 "io_size": 4096, 00:14:42.997 "runtime": 10.062501, 00:14:42.997 "iops": 12893.414867735168, 00:14:42.997 "mibps": 50.3649018270905, 00:14:42.997 "io_failed": 0, 00:14:42.997 "io_timeout": 0, 00:14:42.997 "avg_latency_us": 79166.20294077386, 00:14:42.997 "min_latency_us": 18140.3648, 00:14:42.997 "max_latency_us": 54525.952 00:14:42.997 } 00:14:42.997 ], 00:14:42.997 "core_count": 1 00:14:42.997 } 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 280127 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 280127 ']' 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 280127 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280127 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280127' 00:14:43.258 killing process with pid 280127 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 280127 00:14:43.258 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.258 00:14:43.258 Latency(us) 00:14:43.258 [2024-12-09T22:56:27.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.258 [2024-12-09T22:56:27.731Z] =================================================================================================================== 00:14:43.258 [2024-12-09T22:56:27.731Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 280127 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:43.258 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:43.258 rmmod nvme_tcp 00:14:43.258 rmmod nvme_fabrics 00:14:43.520 rmmod nvme_keyring 00:14:43.520 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:43.520 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:14:43.520 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:14:43.520 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 280010 ']' 00:14:43.520 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 280010 00:14:43.520 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 280010 ']' 00:14:43.520 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 280010 00:14:43.520 
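For context, the measurement step that produced the IOPS/latency results above boils down to the three commands below, as configured by queue_depth.sh in this run (queue depth 1024, 4 KiB verify I/O for 10 seconds); paths are shown relative to the SPDK checkout:

    # run bdevperf in passive (-z) mode against its own RPC socket
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

    # attach the exported subsystem as bdev NVMe0n1, then kick off the workload
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests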
23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:14:43.520 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.520 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280010 00:14:43.520 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:43.520 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:43.520 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280010' 00:14:43.520 killing process with pid 280010 00:14:43.520 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 280010 00:14:43.520 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 280010 00:14:43.780 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:43.780 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:43.780 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:43.780 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:14:43.780 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:14:43.780 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:43.780 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:14:43.780 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:43.780 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:43.780 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.780 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.780 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.693 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:45.693 00:14:45.693 real 0m22.399s 00:14:45.693 user 0m25.354s 00:14:45.693 sys 0m7.525s 00:14:45.693 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.693 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:45.693 ************************************ 00:14:45.693 END TEST nvmf_queue_depth 00:14:45.693 ************************************ 00:14:45.693 23:56:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:45.693 23:56:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:45.693 23:56:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.693 23:56:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:45.954 
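The teardown traced above (nvmftestfini) amounts to roughly the following; the namespace deletion is not shown in the trace because _remove_spdk_ns runs with xtrace disabled, so that line is an inference from the namespace being re-created later in the log:

    # unload the NVMe/TCP kernel modules and drop the SPDK_NVMF iptables rules
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # tear down the test namespace and flush the initiator-side address
    ip netns delete cvl_0_0_ns_spdk   # assumed: what _remove_spdk_ns does here
    ip -4 addr flush cvl_0_1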
************************************ 00:14:45.954 START TEST nvmf_target_multipath 00:14:45.954 ************************************ 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:45.954 * Looking for test storage... 00:14:45.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.954 --rc genhtml_branch_coverage=1 00:14:45.954 --rc genhtml_function_coverage=1 00:14:45.954 --rc genhtml_legend=1 00:14:45.954 --rc geninfo_all_blocks=1 00:14:45.954 --rc geninfo_unexecuted_blocks=1 00:14:45.954 00:14:45.954 ' 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.954 --rc genhtml_branch_coverage=1 00:14:45.954 --rc genhtml_function_coverage=1 00:14:45.954 --rc genhtml_legend=1 00:14:45.954 --rc geninfo_all_blocks=1 00:14:45.954 --rc geninfo_unexecuted_blocks=1 00:14:45.954 00:14:45.954 ' 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.954 --rc genhtml_branch_coverage=1 00:14:45.954 --rc genhtml_function_coverage=1 00:14:45.954 --rc genhtml_legend=1 00:14:45.954 --rc geninfo_all_blocks=1 00:14:45.954 --rc geninfo_unexecuted_blocks=1 00:14:45.954 00:14:45.954 ' 00:14:45.954 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.954 --rc genhtml_branch_coverage=1 00:14:45.954 --rc genhtml_function_coverage=1 00:14:45.954 --rc genhtml_legend=1 00:14:45.954 --rc geninfo_all_blocks=1 00:14:45.954 --rc geninfo_unexecuted_blocks=1 00:14:45.954 00:14:45.954 ' 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.955 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:46.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:14:46.216 23:56:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:54.355 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:54.355 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:54.355 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:54.356 Found net devices under 0000:af:00.0: cvl_0_0 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.356 23:56:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:54.356 Found net devices under 0000:af:00.1: cvl_0_1 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:54.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:14:54.356 00:14:54.356 --- 10.0.0.2 ping statistics --- 00:14:54.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.356 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:14:54.356 00:14:54.356 --- 10.0.0.1 ping statistics --- 00:14:54.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.356 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:54.356 only one NIC for nvmf test 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:54.356 rmmod nvme_tcp 00:14:54.356 rmmod nvme_fabrics 00:14:54.356 rmmod nvme_keyring 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.356 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.741 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:55.741 00:14:55.741 real 0m9.809s 00:14:55.741 user 0m2.273s 00:14:55.741 sys 0m5.597s 00:14:55.741 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.741 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:55.741 ************************************ 00:14:55.741 END TEST nvmf_target_multipath 00:14:55.741 ************************************ 00:14:55.741 23:56:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:55.741 23:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:55.741 23:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.741 23:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:55.741 ************************************ 00:14:55.741 START TEST nvmf_zcopy 00:14:55.741 ************************************ 00:14:55.741 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:55.741 * Looking for test storage... 
00:14:55.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.741 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:55.741 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:14:55.741 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:56.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.003 --rc genhtml_branch_coverage=1 00:14:56.003 --rc genhtml_function_coverage=1 00:14:56.003 --rc genhtml_legend=1 00:14:56.003 --rc geninfo_all_blocks=1 00:14:56.003 --rc geninfo_unexecuted_blocks=1 00:14:56.003 00:14:56.003 ' 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:56.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.003 --rc genhtml_branch_coverage=1 00:14:56.003 --rc genhtml_function_coverage=1 00:14:56.003 --rc genhtml_legend=1 00:14:56.003 --rc geninfo_all_blocks=1 00:14:56.003 --rc geninfo_unexecuted_blocks=1 00:14:56.003 00:14:56.003 ' 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:56.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.003 --rc genhtml_branch_coverage=1 00:14:56.003 --rc genhtml_function_coverage=1 00:14:56.003 --rc genhtml_legend=1 00:14:56.003 --rc geninfo_all_blocks=1 00:14:56.003 --rc geninfo_unexecuted_blocks=1 00:14:56.003 00:14:56.003 ' 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:56.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.003 --rc genhtml_branch_coverage=1 00:14:56.003 --rc genhtml_function_coverage=1 00:14:56.003 --rc genhtml_legend=1 00:14:56.003 --rc geninfo_all_blocks=1 00:14:56.003 --rc geninfo_unexecuted_blocks=1 00:14:56.003 00:14:56.003 ' 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.003 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:14:56.004 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:04.146 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:04.146 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:04.146 Found net devices under 0000:af:00.0: cvl_0_0 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:04.146 Found net devices under 0000:af:00.1: cvl_0_1 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:15:04.146 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:04.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:15:04.147 00:15:04.147 --- 10.0.0.2 ping statistics --- 00:15:04.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.147 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:04.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:04.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:15:04.147 00:15:04.147 --- 10.0.0.1 ping statistics --- 00:15:04.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.147 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=289676 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 289676 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 289676 ']' 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:04.147 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.147 [2024-12-09 23:56:47.716546] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:15:04.147 [2024-12-09 23:56:47.716598] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.147 [2024-12-09 23:56:47.811830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.147 [2024-12-09 23:56:47.851924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.147 [2024-12-09 23:56:47.851960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.147 [2024-12-09 23:56:47.851970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.147 [2024-12-09 23:56:47.851979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.147 [2024-12-09 23:56:47.851986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.147 [2024-12-09 23:56:47.852578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.147 [2024-12-09 23:56:48.594469] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.147 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.147 [2024-12-09 23:56:48.614642] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.408 malloc0 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:04.408 { 00:15:04.408 "params": { 00:15:04.408 "name": "Nvme$subsystem", 00:15:04.408 "trtype": "$TEST_TRANSPORT", 00:15:04.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:04.408 "adrfam": "ipv4", 00:15:04.408 "trsvcid": "$NVMF_PORT", 00:15:04.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:04.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:04.408 "hdgst": ${hdgst:-false}, 00:15:04.408 "ddgst": ${ddgst:-false} 00:15:04.408 }, 00:15:04.408 "method": "bdev_nvme_attach_controller" 00:15:04.408 } 00:15:04.408 EOF 00:15:04.408 )") 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:15:04.408 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:04.408 "params": { 00:15:04.408 "name": "Nvme1", 00:15:04.408 "trtype": "tcp", 00:15:04.408 "traddr": "10.0.0.2", 00:15:04.408 "adrfam": "ipv4", 00:15:04.408 "trsvcid": "4420", 00:15:04.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.408 "hdgst": false, 00:15:04.408 "ddgst": false 00:15:04.408 }, 00:15:04.408 "method": "bdev_nvme_attach_controller" 00:15:04.408 }' 00:15:04.408 [2024-12-09 23:56:48.703871] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:15:04.408 [2024-12-09 23:56:48.703917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid289817 ] 00:15:04.409 [2024-12-09 23:56:48.793051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.409 [2024-12-09 23:56:48.831915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.669 Running I/O for 10 seconds... 00:15:06.996 8811.00 IOPS, 68.84 MiB/s [2024-12-09T22:56:52.410Z] 8807.00 IOPS, 68.80 MiB/s [2024-12-09T22:56:53.351Z] 8827.67 IOPS, 68.97 MiB/s [2024-12-09T22:56:54.293Z] 8852.25 IOPS, 69.16 MiB/s [2024-12-09T22:56:55.239Z] 8868.20 IOPS, 69.28 MiB/s [2024-12-09T22:56:56.180Z] 8878.83 IOPS, 69.37 MiB/s [2024-12-09T22:56:57.121Z] 8886.43 IOPS, 69.43 MiB/s [2024-12-09T22:56:58.063Z] 8899.00 IOPS, 69.52 MiB/s [2024-12-09T22:56:59.447Z] 8903.22 IOPS, 69.56 MiB/s [2024-12-09T22:56:59.447Z] 8911.00 IOPS, 69.62 MiB/s 00:15:14.974 Latency(us) 00:15:14.974 [2024-12-09T22:56:59.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.974 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:14.974 Verification LBA range: start 0x0 length 0x1000 00:15:14.974 Nvme1n1 : 10.01 8909.36 69.60 0.00 0.00 14326.75 2293.76 21915.24 00:15:14.974 [2024-12-09T22:56:59.447Z] =================================================================================================================== 00:15:14.974 [2024-12-09T22:56:59.447Z] Total : 8909.36 69.60 0.00 0.00 14326.75 2293.76 21915.24 00:15:14.975 23:56:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=291657 00:15:14.975 23:56:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:14.975 23:56:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:14.975 23:56:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:14.975 23:56:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:14.975 23:56:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:15:14.975 23:56:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:15:14.975 23:56:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:14.975 23:56:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:14.975 { 00:15:14.975 "params": { 00:15:14.975 "name": 
"Nvme$subsystem", 00:15:14.975 "trtype": "$TEST_TRANSPORT", 00:15:14.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.975 "adrfam": "ipv4", 00:15:14.975 "trsvcid": "$NVMF_PORT", 00:15:14.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.975 "hdgst": ${hdgst:-false}, 00:15:14.975 "ddgst": ${ddgst:-false} 00:15:14.975 }, 00:15:14.975 "method": "bdev_nvme_attach_controller" 00:15:14.975 } 00:15:14.975 EOF 00:15:14.975 )") 00:15:14.975 [2024-12-09 23:56:59.227559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.227593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 23:56:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:15:14.975 23:56:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:15:14.975 23:56:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:15:14.975 23:56:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:14.975 "params": { 00:15:14.975 "name": "Nvme1", 00:15:14.975 "trtype": "tcp", 00:15:14.975 "traddr": "10.0.0.2", 00:15:14.975 "adrfam": "ipv4", 00:15:14.975 "trsvcid": "4420", 00:15:14.975 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.975 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.975 "hdgst": false, 00:15:14.975 "ddgst": false 00:15:14.975 }, 00:15:14.975 "method": "bdev_nvme_attach_controller" 00:15:14.975 }' 00:15:14.975 [2024-12-09 23:56:59.239559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.239579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.251586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.251599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.263618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.263629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.269797] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:15:14.975 [2024-12-09 23:56:59.269849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291657 ] 00:15:14.975 [2024-12-09 23:56:59.275664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.275677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.287680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.287692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.299712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.299723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.311747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.311758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.323779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.323792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.335810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.335821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.347845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.347857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.359875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.359887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.362341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.975 [2024-12-09 23:56:59.371908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.371924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.383937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.383952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.395968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.395991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.401688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.975 [2024-12-09 23:56:59.408008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.408021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.420043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:15:14.975 [2024-12-09 23:56:59.420064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.432068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.432085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:14.975 [2024-12-09 23:56:59.444098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:14.975 [2024-12-09 23:56:59.444112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.456128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.456141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.468161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.468175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.480190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.480202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.492238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.492261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.504260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.504277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.516294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.516310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.528325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.528342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.540355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.540370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.589095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.589114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.600535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.600548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 Running I/O for 5 seconds... 
00:15:15.235 [2024-12-09 23:56:59.616097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.616119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.629428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.629451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.643613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.643635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.657118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.657149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.670843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.670880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.684679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.684700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.235 [2024-12-09 23:56:59.699011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.235 [2024-12-09 23:56:59.699034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.710345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.710371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.724220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.724240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.738046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.738066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.751586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.751607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.765746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.765768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.779128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.779149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.793063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.793085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.806850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 
[2024-12-09 23:56:59.806871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.820531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.820553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.833820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.833848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.847418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.847439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.861289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.861311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.874704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.874725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.888356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.888377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.902006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.902027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.915558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.915580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.928997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.929018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.942473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.942494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.496 [2024-12-09 23:56:59.956119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.496 [2024-12-09 23:56:59.956144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:56:59.969796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:56:59.969821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:56:59.983685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:56:59.983706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:56:59.994157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:56:59.994178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.008236] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.008259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.021852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.021876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.035409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.035432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.044788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.044809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.059249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.059271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.073145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.073169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.086832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.086854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.100699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.100720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.114410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.114431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.128038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.128060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.141640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.141661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.154961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.154982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.168616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.168638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.182188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.182209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.196150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.196171] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.209527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.209548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:15.757 [2024-12-09 23:57:00.223422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:15.757 [2024-12-09 23:57:00.223450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.018 [2024-12-09 23:57:00.237213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.018 [2024-12-09 23:57:00.237234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.018 [2024-12-09 23:57:00.251038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.018 [2024-12-09 23:57:00.251059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.018 [2024-12-09 23:57:00.264391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.018 [2024-12-09 23:57:00.264417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.018 [2024-12-09 23:57:00.278046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.018 [2024-12-09 23:57:00.278067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.018 [2024-12-09 23:57:00.291636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.018 [2024-12-09 23:57:00.291657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.018 [2024-12-09 23:57:00.305029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.018 [2024-12-09 23:57:00.305051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.018 [2024-12-09 23:57:00.318459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.018 [2024-12-09 23:57:00.318480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.018 [2024-12-09 23:57:00.332344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.018 [2024-12-09 23:57:00.332365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.018 [2024-12-09 23:57:00.346004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.018 [2024-12-09 23:57:00.346025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.018 [2024-12-09 23:57:00.359959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.018 [2024-12-09 23:57:00.359980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.018 [2024-12-09 23:57:00.373678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.018 [2024-12-09 23:57:00.373699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.018 [2024-12-09 23:57:00.386986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.018 [2024-12-09 23:57:00.387006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.018 [2024-12-09 23:57:00.400859] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.018 [2024-12-09 23:57:00.400879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.018 [2024-12-09 23:57:00.414713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.019 [2024-12-09 23:57:00.414734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.019 [2024-12-09 23:57:00.428512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.019 [2024-12-09 23:57:00.428533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.019 [2024-12-09 23:57:00.442196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.019 [2024-12-09 23:57:00.442218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.019 [2024-12-09 23:57:00.455702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.019 [2024-12-09 23:57:00.455723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.019 [2024-12-09 23:57:00.469655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.019 [2024-12-09 23:57:00.469675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.019 [2024-12-09 23:57:00.482956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.019 [2024-12-09 23:57:00.482982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.279 [2024-12-09 23:57:00.496573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.279 [2024-12-09 23:57:00.496594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.279 [2024-12-09 23:57:00.510273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.279 [2024-12-09 23:57:00.510293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.279 [2024-12-09 23:57:00.523978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.279 [2024-12-09 23:57:00.523998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.279 [2024-12-09 23:57:00.537541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.279 [2024-12-09 23:57:00.537561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.279 [2024-12-09 23:57:00.551324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.279 [2024-12-09 23:57:00.551343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.279 [2024-12-09 23:57:00.564687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.279 [2024-12-09 23:57:00.564706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.279 [2024-12-09 23:57:00.578288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.279 [2024-12-09 23:57:00.578308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.279 [2024-12-09 23:57:00.592203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.279 [2024-12-09 23:57:00.592223] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.279 16965.00 IOPS, 132.54 MiB/s [2024-12-09T22:57:00.752Z] [2024-12-09 23:57:00.605784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.279 [2024-12-09 23:57:00.605804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.279 [2024-12-09 23:57:00.619316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.279 [2024-12-09 23:57:00.619336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.279 [2024-12-09 23:57:00.632833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.279 [2024-12-09 23:57:00.632857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.279 [2024-12-09 23:57:00.646367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.279 [2024-12-09 23:57:00.646387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.279 [2024-12-09 23:57:00.659999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.279 [2024-12-09 23:57:00.660019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.280 [2024-12-09 23:57:00.673442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.280 [2024-12-09 23:57:00.673463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.280 [2024-12-09 23:57:00.686840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.280 [2024-12-09 23:57:00.686876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.280 [2024-12-09 23:57:00.700959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.280 [2024-12-09 23:57:00.700980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.280 [2024-12-09 23:57:00.714284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.280 [2024-12-09 23:57:00.714305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.280 [2024-12-09 23:57:00.727718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.280 [2024-12-09 23:57:00.727738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.280 [2024-12-09 23:57:00.741661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.280 [2024-12-09 23:57:00.741681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.539 [2024-12-09 23:57:00.756205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.539 [2024-12-09 23:57:00.756228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.539 [2024-12-09 23:57:00.769764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.539 [2024-12-09 23:57:00.769785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.539 [2024-12-09 23:57:00.783639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.539 [2024-12-09 23:57:00.783658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.539 [2024-12-09 
23:57:00.797237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.539 [2024-12-09 23:57:00.797258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.539 [2024-12-09 23:57:00.810784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.539 [2024-12-09 23:57:00.810804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.539 [2024-12-09 23:57:00.824450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.539 [2024-12-09 23:57:00.824471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.539 [2024-12-09 23:57:00.838269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.539 [2024-12-09 23:57:00.838290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.539 [2024-12-09 23:57:00.851712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.539 [2024-12-09 23:57:00.851731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.539 [2024-12-09 23:57:00.865428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.539 [2024-12-09 23:57:00.865448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.539 [2024-12-09 23:57:00.879368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.539 [2024-12-09 23:57:00.879389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.539 [2024-12-09 23:57:00.892804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.539 [2024-12-09 23:57:00.892830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.539 [2024-12-09 23:57:00.906159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.539 [2024-12-09 23:57:00.906180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.539 [2024-12-09 23:57:00.919592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.539 [2024-12-09 23:57:00.919612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.540 [2024-12-09 23:57:00.933182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.540 [2024-12-09 23:57:00.933202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.540 [2024-12-09 23:57:00.946653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.540 [2024-12-09 23:57:00.946673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.540 [2024-12-09 23:57:00.960224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.540 [2024-12-09 23:57:00.960244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.540 [2024-12-09 23:57:00.974057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.540 [2024-12-09 23:57:00.974078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.540 [2024-12-09 23:57:00.987807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.540 [2024-12-09 23:57:00.987833] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.540 [2024-12-09 23:57:01.001353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.540 [2024-12-09 23:57:01.001377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.014843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.014862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.028475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.028495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.042150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.042170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.055810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.055835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.069429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.069450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.082648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.082668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.096585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.096605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.110478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.110499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.124354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.124374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.138408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.138428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.151840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.151860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.165250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.165272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.179245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.179265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.192544] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.192564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.206154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.206175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.219649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.219670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.233359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.233379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.247288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.247312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.800 [2024-12-09 23:57:01.260771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:16.800 [2024-12-09 23:57:01.260792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.274569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.274590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.288287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.288308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.302205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.302227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.315622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.315643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.329278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.329300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.342961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.342982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.356854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.356877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.370440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.370463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.384020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.384042] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.397253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.397275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.410863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.410884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.424207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.424228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.438320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.438341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.452096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.452117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.465951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.465971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.479998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.480018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.493506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.493527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.507094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.507120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.061 [2024-12-09 23:57:01.520795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.061 [2024-12-09 23:57:01.520816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.322 [2024-12-09 23:57:01.534562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.322 [2024-12-09 23:57:01.534583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.322 [2024-12-09 23:57:01.548585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.322 [2024-12-09 23:57:01.548607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.322 [2024-12-09 23:57:01.562202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.322 [2024-12-09 23:57:01.562222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.322 [2024-12-09 23:57:01.575740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.322 [2024-12-09 23:57:01.575760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.322 [2024-12-09 23:57:01.589292] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.589312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 [2024-12-09 23:57:01.602954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.602975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 17085.50 IOPS, 133.48 MiB/s [2024-12-09T22:57:01.796Z] [2024-12-09 23:57:01.616268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.616288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 [2024-12-09 23:57:01.630006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.630027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 [2024-12-09 23:57:01.643575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.643599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 [2024-12-09 23:57:01.657745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.657770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 [2024-12-09 23:57:01.671263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.671285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 [2024-12-09 23:57:01.684930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.684952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 [2024-12-09 23:57:01.698534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.698555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 [2024-12-09 23:57:01.712204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.712225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 [2024-12-09 23:57:01.725635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.725656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 [2024-12-09 23:57:01.739433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.739456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 [2024-12-09 23:57:01.753296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.753317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 [2024-12-09 23:57:01.767184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.767210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 [2024-12-09 23:57:01.781039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:17.323 [2024-12-09 23:57:01.781060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.323 [2024-12-09 23:57:01.794317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.323 [2024-12-09 23:57:01.794338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.583 [2024-12-09 23:57:01.807919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.583 [2024-12-09 23:57:01.807940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.583 [2024-12-09 23:57:01.821561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.583 [2024-12-09 23:57:01.821582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.583 [2024-12-09 23:57:01.834837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.583 [2024-12-09 23:57:01.834858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:01.848319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:01.848340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:01.861737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:01.861757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:01.875234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:01.875254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:01.888302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:01.888322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:01.901573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:01.901592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:01.915033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:01.915054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:01.928633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:01.928654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:01.942277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:01.942298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:01.955978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:01.955999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:01.969583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:01.969604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:01.983289] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:01.983309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:01.997446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:01.997467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:02.010669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:02.010688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:02.024365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:02.024385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:02.037994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:02.038013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.584 [2024-12-09 23:57:02.051863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.584 [2024-12-09 23:57:02.051883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.844 [2024-12-09 23:57:02.065271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.844 [2024-12-09 23:57:02.065290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.844 [2024-12-09 23:57:02.078947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.844 [2024-12-09 23:57:02.078967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.844 [2024-12-09 23:57:02.092560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.844 [2024-12-09 23:57:02.092580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.844 [2024-12-09 23:57:02.106110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.844 [2024-12-09 23:57:02.106131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.844 [2024-12-09 23:57:02.119526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.844 [2024-12-09 23:57:02.119546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.844 [2024-12-09 23:57:02.133043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.844 [2024-12-09 23:57:02.133063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.844 [2024-12-09 23:57:02.146678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.844 [2024-12-09 23:57:02.146699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.844 [2024-12-09 23:57:02.160180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.845 [2024-12-09 23:57:02.160201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.845 [2024-12-09 23:57:02.173777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.845 [2024-12-09 23:57:02.173797] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.845 [2024-12-09 23:57:02.187682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.845 [2024-12-09 23:57:02.187702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.845 [2024-12-09 23:57:02.201006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.845 [2024-12-09 23:57:02.201026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.845 [2024-12-09 23:57:02.214617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.845 [2024-12-09 23:57:02.214637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.845 [2024-12-09 23:57:02.228103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.845 [2024-12-09 23:57:02.228123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.845 [2024-12-09 23:57:02.241199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.845 [2024-12-09 23:57:02.241219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.845 [2024-12-09 23:57:02.254860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.845 [2024-12-09 23:57:02.254880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.845 [2024-12-09 23:57:02.268169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.845 [2024-12-09 23:57:02.268189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.845 [2024-12-09 23:57:02.281601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.845 [2024-12-09 23:57:02.281622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.845 [2024-12-09 23:57:02.295287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.845 [2024-12-09 23:57:02.295307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.845 [2024-12-09 23:57:02.309136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.845 [2024-12-09 23:57:02.309155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.323196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.323217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.333985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.334004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.348251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.348271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.361810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.361836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.375510] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.375530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.389327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.389348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.402786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.402807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.416356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.416376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.429748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.429768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.443099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.443119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.456637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.456658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.470843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.470863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.486859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.486879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.500492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.500512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.513911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.513931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.527214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.527234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.541127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.541147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.554629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.554650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.106 [2024-12-09 23:57:02.568073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.106 [2024-12-09 23:57:02.568093] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.581369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.581390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.595346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.595367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.609024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.609044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 17154.33 IOPS, 134.02 MiB/s [2024-12-09T22:57:02.841Z] [2024-12-09 23:57:02.622764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.622786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.636179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.636200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.650015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.650035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.663763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.663782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.677139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.677159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.690895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.690915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.704487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.704511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.718013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.718034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.731484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.731507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.744999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.745019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.758633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.758652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 
23:57:02.771867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.771888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.785364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.785389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.798771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.798791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.812341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.812362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.825570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.825589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.368 [2024-12-09 23:57:02.839338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.368 [2024-12-09 23:57:02.839359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:02.853185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:02.853206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:02.866610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:02.866632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:02.880164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:02.880185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:02.893564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:02.893585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:02.907218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:02.907239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:02.920730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:02.920751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:02.934515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:02.934537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:02.948128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:02.948149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:02.961772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:02.961793] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:02.975590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:02.975611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:02.988875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:02.988897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:03.002921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:03.002942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:03.014309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:03.014329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:03.028413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:03.028433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:03.041814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:03.041848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:03.055193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:03.055214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:03.068664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:03.068685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:03.081962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:03.081984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.629 [2024-12-09 23:57:03.095097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.629 [2024-12-09 23:57:03.095118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.889 [2024-12-09 23:57:03.108702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.889 [2024-12-09 23:57:03.108722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.889 [2024-12-09 23:57:03.122934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.889 [2024-12-09 23:57:03.122955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.889 [2024-12-09 23:57:03.133427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.889 [2024-12-09 23:57:03.133448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.889 [2024-12-09 23:57:03.147678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.889 [2024-12-09 23:57:03.147699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.889 [2024-12-09 23:57:03.161565] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.889 [2024-12-09 23:57:03.161585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.889 [2024-12-09 23:57:03.174859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.889 [2024-12-09 23:57:03.174881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.889 [2024-12-09 23:57:03.188422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.889 [2024-12-09 23:57:03.188442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.889 [2024-12-09 23:57:03.201797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.889 [2024-12-09 23:57:03.201818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.889 [2024-12-09 23:57:03.215376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.889 [2024-12-09 23:57:03.215396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.889 [2024-12-09 23:57:03.229696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.889 [2024-12-09 23:57:03.229717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.889 [2024-12-09 23:57:03.243230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.890 [2024-12-09 23:57:03.243255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.890 [2024-12-09 23:57:03.256884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.890 [2024-12-09 23:57:03.256905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.890 [2024-12-09 23:57:03.270497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.890 [2024-12-09 23:57:03.270518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.890 [2024-12-09 23:57:03.284086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.890 [2024-12-09 23:57:03.284107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.890 [2024-12-09 23:57:03.297604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.890 [2024-12-09 23:57:03.297628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.890 [2024-12-09 23:57:03.311123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.890 [2024-12-09 23:57:03.311144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.890 [2024-12-09 23:57:03.325021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.890 [2024-12-09 23:57:03.325042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.890 [2024-12-09 23:57:03.338242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.890 [2024-12-09 23:57:03.338263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.890 [2024-12-09 23:57:03.351430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.890 [2024-12-09 23:57:03.351451] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.150 [2024-12-09 23:57:03.364788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.150 [2024-12-09 23:57:03.364809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.150 [2024-12-09 23:57:03.378291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.150 [2024-12-09 23:57:03.378312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.150 [2024-12-09 23:57:03.391979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.150 [2024-12-09 23:57:03.392000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.150 [2024-12-09 23:57:03.405729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.150 [2024-12-09 23:57:03.405751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.150 [2024-12-09 23:57:03.419238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.150 [2024-12-09 23:57:03.419258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.150 [2024-12-09 23:57:03.432750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.150 [2024-12-09 23:57:03.432770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.150 [2024-12-09 23:57:03.446819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.150 [2024-12-09 23:57:03.446845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.150 [2024-12-09 23:57:03.458334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.150 [2024-12-09 23:57:03.458355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.150 [2024-12-09 23:57:03.472200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.150 [2024-12-09 23:57:03.472220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.151 [2024-12-09 23:57:03.485763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.151 [2024-12-09 23:57:03.485782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.151 [2024-12-09 23:57:03.499510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.151 [2024-12-09 23:57:03.499531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.151 [2024-12-09 23:57:03.513123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.151 [2024-12-09 23:57:03.513142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.151 [2024-12-09 23:57:03.526806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.151 [2024-12-09 23:57:03.526832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.151 [2024-12-09 23:57:03.540532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.151 [2024-12-09 23:57:03.540552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.151 [2024-12-09 23:57:03.554181] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.151 [2024-12-09 23:57:03.554205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.151 [2024-12-09 23:57:03.567607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.151 [2024-12-09 23:57:03.567627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.151 [2024-12-09 23:57:03.581130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.151 [2024-12-09 23:57:03.581150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.151 [2024-12-09 23:57:03.594712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.151 [2024-12-09 23:57:03.594733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.151 [2024-12-09 23:57:03.608226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.151 [2024-12-09 23:57:03.608250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.151 17174.00 IOPS, 134.17 MiB/s [2024-12-09T22:57:03.624Z] [2024-12-09 23:57:03.621952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.151 [2024-12-09 23:57:03.621971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.411 [2024-12-09 23:57:03.635456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.411 [2024-12-09 23:57:03.635476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.411 [2024-12-09 23:57:03.648809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.411 [2024-12-09 23:57:03.648836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.411 [2024-12-09 23:57:03.662377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.411 [2024-12-09 23:57:03.662397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.411 [2024-12-09 23:57:03.676129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.411 [2024-12-09 23:57:03.676149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.411 [2024-12-09 23:57:03.689835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.411 [2024-12-09 23:57:03.689855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.411 [2024-12-09 23:57:03.703561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.411 [2024-12-09 23:57:03.703581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.411 [2024-12-09 23:57:03.717350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.411 [2024-12-09 23:57:03.717371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.411 [2024-12-09 23:57:03.730557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.411 [2024-12-09 23:57:03.730578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.411 [2024-12-09 23:57:03.744469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:19.411 [2024-12-09 23:57:03.744489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.411 [2024-12-09 23:57:03.757903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.412 [2024-12-09 23:57:03.757924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.412 [2024-12-09 23:57:03.771616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.412 [2024-12-09 23:57:03.771639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.412 [2024-12-09 23:57:03.785512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.412 [2024-12-09 23:57:03.785533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.412 [2024-12-09 23:57:03.798884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.412 [2024-12-09 23:57:03.798904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.412 [2024-12-09 23:57:03.812577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.412 [2024-12-09 23:57:03.812597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.412 [2024-12-09 23:57:03.826422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.412 [2024-12-09 23:57:03.826443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.412 [2024-12-09 23:57:03.839731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.412 [2024-12-09 23:57:03.839753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.412 [2024-12-09 23:57:03.853471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.412 [2024-12-09 23:57:03.853492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.412 [2024-12-09 23:57:03.867182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.412 [2024-12-09 23:57:03.867202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.412 [2024-12-09 23:57:03.881094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.412 [2024-12-09 23:57:03.881114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:03.894395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:03.894415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:03.908106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:03.908126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:03.921894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:03.921923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:03.935613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:03.935632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:03.948909] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:03.948929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:03.962984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:03.963004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:03.976700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:03.976721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:03.989913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:03.989934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:04.003209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:04.003230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:04.016905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:04.016926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:04.031111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:04.031131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:04.044395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:04.044416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:04.058062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:04.058083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:04.071962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:04.071994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:04.086136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:04.086157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:04.099767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:04.099788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:04.113588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:04.113608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:04.126968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:04.126989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.673 [2024-12-09 23:57:04.140568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.673 [2024-12-09 23:57:04.140589] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.934 [2024-12-09 23:57:04.154685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.934 [2024-12-09 23:57:04.154706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.934 [2024-12-09 23:57:04.168347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.934 [2024-12-09 23:57:04.168368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.934 [2024-12-09 23:57:04.181956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.934 [2024-12-09 23:57:04.181976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.934 [2024-12-09 23:57:04.195438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.934 [2024-12-09 23:57:04.195458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.934 [2024-12-09 23:57:04.208972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.934 [2024-12-09 23:57:04.208992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.934 [2024-12-09 23:57:04.222117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.934 [2024-12-09 23:57:04.222138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.934 [2024-12-09 23:57:04.235599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.934 [2024-12-09 23:57:04.235620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.934 [2024-12-09 23:57:04.249359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.934 [2024-12-09 23:57:04.249379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.934 [2024-12-09 23:57:04.262942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.934 [2024-12-09 23:57:04.262963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.934 [2024-12-09 23:57:04.276241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.934 [2024-12-09 23:57:04.276261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.934 [2024-12-09 23:57:04.289993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.934 [2024-12-09 23:57:04.290014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.934 [2024-12-09 23:57:04.303424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.934 [2024-12-09 23:57:04.303444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.935 [2024-12-09 23:57:04.317358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.935 [2024-12-09 23:57:04.317384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.935 [2024-12-09 23:57:04.330919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.935 [2024-12-09 23:57:04.330940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.935 [2024-12-09 23:57:04.344862] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.935 [2024-12-09 23:57:04.344882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.935 [2024-12-09 23:57:04.358592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.935 [2024-12-09 23:57:04.358613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.935 [2024-12-09 23:57:04.372566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.935 [2024-12-09 23:57:04.372587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.935 [2024-12-09 23:57:04.386671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.935 [2024-12-09 23:57:04.386692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.935 [2024-12-09 23:57:04.399881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.935 [2024-12-09 23:57:04.399902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.195 [2024-12-09 23:57:04.413501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.195 [2024-12-09 23:57:04.413523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.426810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.426838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.440370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.440392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.454018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.454041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.468155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.468176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.482840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.482862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.496750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.496772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.510409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.510432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.524177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.524198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.537897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.537917] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.551755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.551777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.565411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.565432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.579124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.579150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.592816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.592844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.606273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.606293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 17182.00 IOPS, 134.23 MiB/s [2024-12-09T22:57:04.669Z] [2024-12-09 23:57:04.618789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.618810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 00:15:20.196 Latency(us) 00:15:20.196 [2024-12-09T22:57:04.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.196 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:20.196 Nvme1n1 : 5.01 17184.57 134.25 0.00 0.00 7441.15 2713.19 18769.51 00:15:20.196 [2024-12-09T22:57:04.669Z] =================================================================================================================== 00:15:20.196 [2024-12-09T22:57:04.669Z] Total : 17184.57 134.25 0.00 0.00 7441.15 2713.19 18769.51 00:15:20.196 [2024-12-09 23:57:04.628370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.628390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.640399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.640415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.652444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.652465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.196 [2024-12-09 23:57:04.664467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.196 [2024-12-09 23:57:04.664485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.457 [2024-12-09 23:57:04.676499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.457 [2024-12-09 23:57:04.676517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.457 [2024-12-09 23:57:04.688525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.457 [2024-12-09 
23:57:04.688541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.457 [2024-12-09 23:57:04.700559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.457 [2024-12-09 23:57:04.700576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.457 [2024-12-09 23:57:04.712592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.457 [2024-12-09 23:57:04.712608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.457 [2024-12-09 23:57:04.724623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.457 [2024-12-09 23:57:04.724640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.457 [2024-12-09 23:57:04.736653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.457 [2024-12-09 23:57:04.736667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.457 [2024-12-09 23:57:04.748688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.457 [2024-12-09 23:57:04.748701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.457 [2024-12-09 23:57:04.760716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.457 [2024-12-09 23:57:04.760730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.457 [2024-12-09 23:57:04.772746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.457 [2024-12-09 23:57:04.772763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (291657) - No such process 00:15:20.457 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 291657 00:15:20.457 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.457 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.457 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.457 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.457 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:20.457 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.457 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.457 delay0 00:15:20.457 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.457 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:20.457 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.457 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.457 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.457 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:20.718 [2024-12-09 23:57:04.931762] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:27.299 Initializing NVMe Controllers 00:15:27.299 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:27.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:27.299 Initialization complete. Launching workers. 00:15:27.299 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 371 00:15:27.299 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 657, failed to submit 34 00:15:27.299 success 470, unsuccessful 187, failed 0 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:27.299 rmmod nvme_tcp 00:15:27.299 rmmod nvme_fabrics 00:15:27.299 rmmod nvme_keyring 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 289676 ']' 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 289676 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 289676 ']' 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 289676 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 289676 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 289676' 00:15:27.299 killing process with pid 289676 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 289676 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@978 -- # wait 289676 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.299 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.300 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.213 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:29.213 00:15:29.213 real 0m33.362s 00:15:29.213 user 0m43.413s 00:15:29.213 sys 0m12.527s 00:15:29.213 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.213 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:29.213 ************************************ 00:15:29.213 END TEST nvmf_zcopy 00:15:29.213 ************************************ 00:15:29.213 23:57:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:29.213 23:57:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:29.213 23:57:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.213 23:57:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:29.213 ************************************ 00:15:29.213 START TEST nvmf_nmic 00:15:29.213 ************************************ 00:15:29.213 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:29.213 * Looking for test storage... 
00:15:29.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.213 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:29.213 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:15:29.213 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:29.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.475 --rc genhtml_branch_coverage=1 00:15:29.475 --rc genhtml_function_coverage=1 00:15:29.475 --rc genhtml_legend=1 00:15:29.475 --rc geninfo_all_blocks=1 00:15:29.475 --rc geninfo_unexecuted_blocks=1 00:15:29.475 00:15:29.475 ' 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:29.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.475 --rc genhtml_branch_coverage=1 00:15:29.475 --rc genhtml_function_coverage=1 00:15:29.475 --rc genhtml_legend=1 00:15:29.475 --rc geninfo_all_blocks=1 00:15:29.475 --rc geninfo_unexecuted_blocks=1 00:15:29.475 00:15:29.475 ' 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:29.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.475 --rc genhtml_branch_coverage=1 00:15:29.475 --rc genhtml_function_coverage=1 00:15:29.475 --rc genhtml_legend=1 00:15:29.475 --rc geninfo_all_blocks=1 00:15:29.475 --rc geninfo_unexecuted_blocks=1 00:15:29.475 00:15:29.475 ' 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:29.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.475 --rc genhtml_branch_coverage=1 00:15:29.475 --rc genhtml_function_coverage=1 00:15:29.475 --rc genhtml_legend=1 00:15:29.475 --rc geninfo_all_blocks=1 00:15:29.475 --rc geninfo_unexecuted_blocks=1 00:15:29.475 00:15:29.475 ' 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.475 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:29.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:29.476 
23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:15:29.476 23:57:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.615 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:37.615 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:15:37.615 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:37.615 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:37.615 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:37.615 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:37.615 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:37.615 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:15:37.615 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:37.615 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:37.616 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:37.616 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:37.616 23:57:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:37.616 Found net devices under 0000:af:00.0: cvl_0_0 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:37.616 Found net devices under 0000:af:00.1: cvl_0_1 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:37.616 23:57:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:37.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:37.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:15:37.616 00:15:37.616 --- 10.0.0.2 ping statistics --- 00:15:37.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.616 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:37.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:37.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:15:37.616 00:15:37.616 --- 10.0.0.1 ping statistics --- 00:15:37.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.616 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=298001 00:15:37.616 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:37.617 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 298001 00:15:37.617 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 298001 ']' 00:15:37.617 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.617 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.617 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.617 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.617 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.617 [2024-12-09 23:57:21.158473] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:15:37.617 [2024-12-09 23:57:21.158521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.617 [2024-12-09 23:57:21.255438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.617 [2024-12-09 23:57:21.294690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.617 [2024-12-09 23:57:21.294732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.617 [2024-12-09 23:57:21.294741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.617 [2024-12-09 23:57:21.294750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.617 [2024-12-09 23:57:21.294756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.617 [2024-12-09 23:57:21.296310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.617 [2024-12-09 23:57:21.296419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.617 [2024-12-09 23:57:21.296528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.617 [2024-12-09 23:57:21.296530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.617 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.617 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:15:37.617 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:37.617 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:37.617 23:57:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.617 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.617 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.617 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.617 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.617 [2024-12-09 23:57:22.048879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.617 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.617 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:37.617 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.617 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.877 Malloc0 00:15:37.877 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.877 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:37.877 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.877 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.878 [2024-12-09 23:57:22.121687] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:37.878 test case1: single bdev can't be used in multiple subsystems 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.878 [2024-12-09 23:57:22.145564] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:37.878 [2024-12-09 23:57:22.145590] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:37.878 [2024-12-09 23:57:22.145601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.878 request: 00:15:37.878 { 00:15:37.878 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:37.878 "namespace": { 00:15:37.878 "bdev_name": "Malloc0", 00:15:37.878 "no_auto_visible": false, 
00:15:37.878 "hide_metadata": false 00:15:37.878 }, 00:15:37.878 "method": "nvmf_subsystem_add_ns", 00:15:37.878 "req_id": 1 00:15:37.878 } 00:15:37.878 Got JSON-RPC error response 00:15:37.878 response: 00:15:37.878 { 00:15:37.878 "code": -32602, 00:15:37.878 "message": "Invalid parameters" 00:15:37.878 } 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:37.878 Adding namespace failed - expected result. 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:37.878 test case2: host connect to nvmf target in multiple paths 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:37.878 [2024-12-09 23:57:22.161735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.878 23:57:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:39.261 23:57:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:40.646 23:57:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:40.646 23:57:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:15:40.646 23:57:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:40.646 23:57:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:40.646 23:57:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:15:42.557 23:57:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:42.557 23:57:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:42.557 23:57:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:42.557 23:57:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:42.557 23:57:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:42.557 23:57:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:15:42.557 23:57:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:42.557 [global] 00:15:42.557 thread=1 00:15:42.557 invalidate=1 00:15:42.557 rw=write 00:15:42.557 time_based=1 00:15:42.557 runtime=1 00:15:42.557 ioengine=libaio 00:15:42.557 direct=1 00:15:42.557 bs=4096 00:15:42.557 iodepth=1 00:15:42.557 norandommap=0 00:15:42.557 numjobs=1 00:15:42.557 00:15:42.557 verify_dump=1 00:15:42.557 verify_backlog=512 00:15:42.557 verify_state_save=0 00:15:42.557 do_verify=1 00:15:42.557 verify=crc32c-intel 00:15:42.557 [job0] 00:15:42.557 filename=/dev/nvme0n1 00:15:42.557 Could not set queue depth (nvme0n1) 00:15:43.174 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:43.174 fio-3.35 00:15:43.174 Starting 1 thread 00:15:44.108 00:15:44.108 job0: (groupid=0, jobs=1): err= 0: pid=299230: Mon Dec 9 23:57:28 2024 00:15:44.108 read: IOPS=22, BW=89.4KiB/s (91.6kB/s)(92.0KiB/1029msec) 00:15:44.108 slat (nsec): min=11103, max=30446, avg=24078.26, stdev=3205.78 00:15:44.108 clat (usec): min=40631, max=41884, avg=40995.23, stdev=220.34 00:15:44.108 lat (usec): min=40642, max=41909, avg=41019.31, stdev=221.22 00:15:44.108 clat percentiles (usec): 00:15:44.108 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:15:44.108 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:44.108 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:44.108 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:44.108 | 99.99th=[41681] 00:15:44.108 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:15:44.108 slat (nsec): min=11195, max=48717, avg=12265.61, stdev=2201.45 00:15:44.108 clat (usec): min=126, max=370, avg=151.18, stdev=25.12 00:15:44.108 lat (usec): min=138, max=418, avg=163.44, stdev=25.82 00:15:44.108 clat percentiles (usec): 00:15:44.108 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:15:44.108 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 145], 60.00th=[ 147], 00:15:44.108 | 70.00th=[ 149], 80.00th=[ 151], 90.00th=[ 155], 95.00th=[ 237], 00:15:44.108 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 371], 99.95th=[ 371], 00:15:44.108 | 99.99th=[ 371] 00:15:44.108 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:44.108 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:44.108 lat (usec) : 250=95.51%, 500=0.19% 00:15:44.108 lat (msec) : 50=4.30% 00:15:44.108 cpu : usr=0.19%, sys=0.68%, ctx=535, majf=0, minf=1 00:15:44.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:44.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.108 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:44.108 00:15:44.108 Run status group 0 (all jobs): 00:15:44.108 READ: bw=89.4KiB/s (91.6kB/s), 89.4KiB/s-89.4KiB/s (91.6kB/s-91.6kB/s), io=92.0KiB (94.2kB), run=1029-1029msec 00:15:44.108 WRITE: bw=1990KiB/s (2038kB/s), 1990KiB/s-1990KiB/s (2038kB/s-2038kB/s), io=2048KiB (2097kB), run=1029-1029msec 00:15:44.109 00:15:44.109 Disk stats (read/write): 00:15:44.109 nvme0n1: ios=69/512, merge=0/0, ticks=1011/73, in_queue=1084, util=95.49% 00:15:44.109 
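For reference, the multipath check that just completed reduces to connecting the same subsystem over both listeners and running a short write/verify job against the resulting namespace. A condensed sketch of that sequence follows; the --hostnqn/--hostid placeholders stand in for the nvme gen-hostnqn identity used by this run, and the raw fio invocation approximates what the fio-wrapper script assembles from the job file printed above:

# connect the same subsystem over both listeners (ports 4420 and 4421)
nvme connect --hostnqn=<host-nqn> --hostid=<host-id> -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=<host-nqn> --hostid=<host-id> -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
# wait until the namespace shows up under its serial, then run a 1-second write+verify pass
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=write --bs=4096 --iodepth=1 --time_based=1 --runtime=1 --numjobs=1 --verify=crc32c-intel --verify_backlog=512 --do_verify=1
# tear both paths down again
nvme disconnect -n nqn.2016-06.io.spdk:cnode1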
23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:44.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:44.367 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:44.625 rmmod nvme_tcp 00:15:44.626 rmmod nvme_fabrics 00:15:44.626 rmmod nvme_keyring 00:15:44.626 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:44.626 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:15:44.626 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:15:44.626 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 298001 ']' 00:15:44.626 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 298001 00:15:44.626 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 298001 ']' 00:15:44.626 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 298001 00:15:44.626 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:15:44.626 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.626 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 298001 00:15:44.626 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:44.626 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:44.626 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 298001' 00:15:44.626 killing process with pid 298001 00:15:44.626 23:57:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 298001 00:15:44.626 23:57:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 298001 00:15:44.886 23:57:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:44.886 23:57:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:44.886 23:57:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:44.886 23:57:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:15:44.886 23:57:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:15:44.886 23:57:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:44.886 23:57:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:15:44.886 23:57:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:44.886 23:57:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:44.886 23:57:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.886 23:57:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:44.886 23:57:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.799 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:46.799 00:15:46.799 real 0m17.700s 00:15:46.799 user 0m41.933s 00:15:46.799 sys 0m6.870s 00:15:46.799 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.799 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:46.799 ************************************ 00:15:46.799 END TEST nvmf_nmic 00:15:46.799 ************************************ 00:15:47.059 23:57:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:47.059 23:57:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:47.059 23:57:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.059 23:57:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:47.059 ************************************ 00:15:47.059 START TEST nvmf_fio_target 00:15:47.059 ************************************ 00:15:47.059 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:47.059 * Looking for test storage... 
00:15:47.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.059 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:47.059 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:47.059 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:47.059 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:47.059 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:47.059 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:47.059 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:47.060 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:15:47.321 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:47.321 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.321 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:47.321 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:15:47.321 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:47.321 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:47.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.321 --rc genhtml_branch_coverage=1 00:15:47.321 --rc genhtml_function_coverage=1 00:15:47.321 --rc genhtml_legend=1 00:15:47.321 --rc geninfo_all_blocks=1 00:15:47.321 --rc geninfo_unexecuted_blocks=1 00:15:47.321 00:15:47.321 ' 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:47.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.322 --rc genhtml_branch_coverage=1 00:15:47.322 --rc genhtml_function_coverage=1 00:15:47.322 --rc genhtml_legend=1 00:15:47.322 --rc geninfo_all_blocks=1 00:15:47.322 --rc geninfo_unexecuted_blocks=1 00:15:47.322 00:15:47.322 ' 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:47.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.322 --rc genhtml_branch_coverage=1 00:15:47.322 --rc genhtml_function_coverage=1 00:15:47.322 --rc genhtml_legend=1 00:15:47.322 --rc geninfo_all_blocks=1 00:15:47.322 --rc geninfo_unexecuted_blocks=1 00:15:47.322 00:15:47.322 ' 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:47.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.322 --rc genhtml_branch_coverage=1 00:15:47.322 --rc genhtml_function_coverage=1 00:15:47.322 --rc genhtml_legend=1 00:15:47.322 --rc geninfo_all_blocks=1 00:15:47.322 --rc geninfo_unexecuted_blocks=1 00:15:47.322 00:15:47.322 ' 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:47.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:47.322 23:57:31 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:47.322 23:57:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.454 23:57:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:55.454 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:55.454 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.454 23:57:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:55.454 Found net devices under 0000:af:00.0: cvl_0_0 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.454 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:55.454 Found net devices under 0000:af:00.1: cvl_0_1 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.455 23:57:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:55.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:15:55.455 00:15:55.455 --- 10.0.0.2 ping statistics --- 00:15:55.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.455 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:55.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:15:55.455 00:15:55.455 --- 10.0.0.1 ping statistics --- 00:15:55.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.455 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=303197 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 303197 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 303197 ']' 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.455 23:57:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.455 [2024-12-09 23:57:38.932066] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
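Once the interfaces are back in place, fio.sh starts the target inside the namespace and configures it entirely over JSON-RPC; the rpc.py calls that follow below boil down to this sketch (paths shortened, bdev names shown as the defaults the log reports):

# start the target in the namespace; the harness waits for /var/tmp/spdk.sock before issuing RPCs
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# create the TCP transport and two 64 MiB malloc bdevs with 512-byte blocks
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512   # -> Malloc0
./scripts/rpc.py bdev_malloc_create 64 512   # -> Malloc1
# expose them through one subsystem listening on the namespace-side address
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# (the full script also builds raid0 and concat0 from further malloc bdevs and adds them as namespaces the same way)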
00:15:55.455 [2024-12-09 23:57:38.932115] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.455 [2024-12-09 23:57:39.014950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:55.455 [2024-12-09 23:57:39.055674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.455 [2024-12-09 23:57:39.055717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.455 [2024-12-09 23:57:39.055726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.455 [2024-12-09 23:57:39.055735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.455 [2024-12-09 23:57:39.055742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:55.455 [2024-12-09 23:57:39.058843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.455 [2024-12-09 23:57:39.058890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.455 [2024-12-09 23:57:39.058998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.455 [2024-12-09 23:57:39.058999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.455 23:57:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.455 23:57:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:15:55.455 23:57:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:55.455 23:57:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:55.455 23:57:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.455 23:57:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.455 23:57:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:55.455 [2024-12-09 23:57:39.386252] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.455 23:57:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:55.455 23:57:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:55.455 23:57:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:55.455 23:57:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:55.455 23:57:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:55.713 23:57:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:55.713 23:57:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:55.971 23:57:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:55.971 23:57:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:56.228 23:57:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.486 23:57:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:56.486 23:57:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.486 23:57:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:56.486 23:57:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.743 23:57:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:56.743 23:57:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:57.000 23:57:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:57.257 23:57:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:57.257 23:57:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:57.514 23:57:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:57.514 23:57:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.514 23:57:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.771 [2024-12-09 23:57:42.094925] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.771 23:57:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:58.029 23:57:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:58.286 23:57:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:59.659 23:57:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:59.659 23:57:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:15:59.659 23:57:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.659 23:57:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:15:59.659 23:57:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:15:59.659 23:57:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:16:01.556 23:57:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:01.556 23:57:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:01.556 23:57:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:01.556 23:57:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:16:01.556 23:57:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.556 23:57:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:16:01.556 23:57:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:01.556 [global] 00:16:01.556 thread=1 00:16:01.556 invalidate=1 00:16:01.556 rw=write 00:16:01.556 time_based=1 00:16:01.556 runtime=1 00:16:01.556 ioengine=libaio 00:16:01.556 direct=1 00:16:01.556 bs=4096 00:16:01.556 iodepth=1 00:16:01.556 norandommap=0 00:16:01.556 numjobs=1 00:16:01.556 00:16:01.556 verify_dump=1 00:16:01.556 verify_backlog=512 00:16:01.556 verify_state_save=0 00:16:01.556 do_verify=1 00:16:01.556 verify=crc32c-intel 00:16:01.556 [job0] 00:16:01.556 filename=/dev/nvme0n1 00:16:01.556 [job1] 00:16:01.556 filename=/dev/nvme0n2 00:16:01.556 [job2] 00:16:01.556 filename=/dev/nvme0n3 00:16:01.556 [job3] 00:16:01.556 filename=/dev/nvme0n4 00:16:01.556 Could not set queue depth (nvme0n1) 00:16:01.556 Could not set queue depth (nvme0n2) 00:16:01.556 Could not set queue depth (nvme0n3) 00:16:01.556 Could not set queue depth (nvme0n4) 00:16:01.813 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.813 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.813 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.813 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.813 fio-3.35 00:16:01.813 Starting 4 threads 00:16:03.185 00:16:03.185 job0: (groupid=0, jobs=1): err= 0: pid=304733: Mon Dec 9 23:57:47 2024 00:16:03.185 read: IOPS=1528, BW=6115KiB/s (6262kB/s)(6164KiB/1008msec) 00:16:03.185 slat (nsec): min=8261, max=31143, avg=9212.16, stdev=1215.95 00:16:03.185 clat (usec): min=202, max=41940, avg=383.95, stdev=2332.54 00:16:03.185 lat (usec): min=211, max=41960, avg=393.16, stdev=2332.96 00:16:03.185 clat percentiles (usec): 00:16:03.185 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 235], 
00:16:03.185 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:16:03.185 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 302], 00:16:03.185 | 99.00th=[ 465], 99.50th=[ 506], 99.90th=[41157], 99.95th=[41681], 00:16:03.185 | 99.99th=[41681] 00:16:03.185 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:16:03.185 slat (nsec): min=11340, max=45650, avg=13375.01, stdev=2857.55 00:16:03.185 clat (usec): min=110, max=360, avg=178.02, stdev=47.22 00:16:03.185 lat (usec): min=122, max=406, avg=191.40, stdev=47.46 00:16:03.185 clat percentiles (usec): 00:16:03.185 | 1.00th=[ 122], 5.00th=[ 127], 10.00th=[ 131], 20.00th=[ 139], 00:16:03.185 | 30.00th=[ 145], 40.00th=[ 153], 50.00th=[ 167], 60.00th=[ 178], 00:16:03.185 | 70.00th=[ 190], 80.00th=[ 212], 90.00th=[ 258], 95.00th=[ 281], 00:16:03.185 | 99.00th=[ 302], 99.50th=[ 306], 99.90th=[ 318], 99.95th=[ 322], 00:16:03.185 | 99.99th=[ 363] 00:16:03.185 bw ( KiB/s): min= 7880, max= 8504, per=37.85%, avg=8192.00, stdev=441.23, samples=2 00:16:03.185 iops : min= 1970, max= 2126, avg=2048.00, stdev=110.31, samples=2 00:16:03.185 lat (usec) : 250=79.55%, 500=20.20%, 750=0.11% 00:16:03.186 lat (msec) : 50=0.14% 00:16:03.186 cpu : usr=3.08%, sys=3.87%, ctx=3589, majf=0, minf=2 00:16:03.186 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.186 issued rwts: total=1541,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.186 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.186 job1: (groupid=0, jobs=1): err= 0: pid=304734: Mon Dec 9 23:57:47 2024 00:16:03.186 read: IOPS=22, BW=88.4KiB/s (90.5kB/s)(92.0KiB/1041msec) 00:16:03.186 slat (nsec): min=11354, max=26235, avg=23988.39, stdev=2891.47 00:16:03.186 clat (usec): min=40754, max=41984, avg=41125.93, stdev=395.92 00:16:03.186 lat (usec): min=40765, max=42008, avg=41149.92, stdev=396.42 00:16:03.186 clat percentiles (usec): 00:16:03.186 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:03.186 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:03.186 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:16:03.186 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:03.186 | 99.99th=[42206] 00:16:03.186 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:16:03.186 slat (nsec): min=12370, max=47258, avg=13619.65, stdev=2312.82 00:16:03.186 clat (usec): min=135, max=332, avg=166.94, stdev=22.70 00:16:03.186 lat (usec): min=148, max=379, avg=180.56, stdev=23.31 00:16:03.186 clat percentiles (usec): 00:16:03.186 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:16:03.186 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:16:03.186 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 237], 00:16:03.186 | 99.00th=[ 239], 99.50th=[ 241], 99.90th=[ 334], 99.95th=[ 334], 00:16:03.186 | 99.99th=[ 334] 00:16:03.186 bw ( KiB/s): min= 4096, max= 4096, per=18.93%, avg=4096.00, stdev= 0.00, samples=1 00:16:03.186 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:03.186 lat (usec) : 250=95.33%, 500=0.37% 00:16:03.186 lat (msec) : 50=4.30% 00:16:03.186 cpu : usr=0.48%, sys=0.96%, ctx=536, majf=0, minf=1 00:16:03.186 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:16:03.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.186 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.186 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.186 job2: (groupid=0, jobs=1): err= 0: pid=304735: Mon Dec 9 23:57:47 2024 00:16:03.186 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:16:03.186 slat (nsec): min=11039, max=24836, avg=23588.64, stdev=2820.05 00:16:03.186 clat (usec): min=40861, max=41971, avg=41054.37, stdev=289.37 00:16:03.186 lat (usec): min=40874, max=41995, avg=41077.96, stdev=289.78 00:16:03.186 clat percentiles (usec): 00:16:03.186 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:03.186 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:03.186 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:16:03.186 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:03.186 | 99.99th=[42206] 00:16:03.186 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:16:03.186 slat (nsec): min=9429, max=47704, avg=12314.95, stdev=2146.62 00:16:03.186 clat (usec): min=139, max=291, avg=178.93, stdev=19.00 00:16:03.186 lat (usec): min=149, max=339, avg=191.25, stdev=19.66 00:16:03.186 clat percentiles (usec): 00:16:03.186 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 163], 00:16:03.186 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 184], 00:16:03.186 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 210], 00:16:03.186 | 99.00th=[ 229], 99.50th=[ 241], 99.90th=[ 293], 99.95th=[ 293], 00:16:03.186 | 99.99th=[ 293] 00:16:03.186 bw ( KiB/s): min= 4096, max= 4096, per=18.93%, avg=4096.00, stdev= 0.00, samples=1 00:16:03.186 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:03.186 lat (usec) : 250=95.69%, 500=0.19% 00:16:03.186 lat (msec) : 50=4.12% 00:16:03.186 cpu : usr=0.40%, sys=0.60%, ctx=534, majf=0, minf=1 00:16:03.186 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.186 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.186 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.186 job3: (groupid=0, jobs=1): err= 0: pid=304736: Mon Dec 9 23:57:47 2024 00:16:03.186 read: IOPS=2147, BW=8591KiB/s (8798kB/s)(8600KiB/1001msec) 00:16:03.186 slat (nsec): min=9042, max=44310, avg=9968.53, stdev=1507.71 00:16:03.186 clat (usec): min=172, max=508, avg=246.37, stdev=40.66 00:16:03.186 lat (usec): min=186, max=520, avg=256.33, stdev=40.72 00:16:03.186 clat percentiles (usec): 00:16:03.186 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 227], 00:16:03.186 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:16:03.186 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 289], 00:16:03.186 | 99.00th=[ 482], 99.50th=[ 494], 99.90th=[ 506], 99.95th=[ 510], 00:16:03.186 | 99.99th=[ 510] 00:16:03.186 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:03.186 slat (nsec): min=12453, max=54760, avg=13633.06, stdev=1851.39 00:16:03.186 clat (usec): min=118, max=331, avg=156.52, stdev=22.74 00:16:03.186 lat (usec): min=131, max=386, avg=170.16, stdev=23.10 
00:16:03.186 clat percentiles (usec): 00:16:03.186 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:16:03.186 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 155], 00:16:03.186 | 70.00th=[ 165], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 200], 00:16:03.186 | 99.00th=[ 237], 99.50th=[ 239], 99.90th=[ 255], 99.95th=[ 258], 00:16:03.186 | 99.99th=[ 330] 00:16:03.186 bw ( KiB/s): min=10600, max=10600, per=48.98%, avg=10600.00, stdev= 0.00, samples=1 00:16:03.186 iops : min= 2650, max= 2650, avg=2650.00, stdev= 0.00, samples=1 00:16:03.186 lat (usec) : 250=85.97%, 500=13.95%, 750=0.08% 00:16:03.186 cpu : usr=4.90%, sys=7.90%, ctx=4711, majf=0, minf=1 00:16:03.186 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.186 issued rwts: total=2150,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.186 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.186 00:16:03.186 Run status group 0 (all jobs): 00:16:03.186 READ: bw=14.0MiB/s (14.7MB/s), 87.7KiB/s-8591KiB/s (89.8kB/s-8798kB/s), io=14.6MiB (15.3MB), run=1001-1041msec 00:16:03.186 WRITE: bw=21.1MiB/s (22.2MB/s), 1967KiB/s-9.99MiB/s (2015kB/s-10.5MB/s), io=22.0MiB (23.1MB), run=1001-1041msec 00:16:03.186 00:16:03.186 Disk stats (read/write): 00:16:03.186 nvme0n1: ios=1586/2048, merge=0/0, ticks=387/359, in_queue=746, util=83.77% 00:16:03.186 nvme0n2: ios=46/512, merge=0/0, ticks=1685/80, in_queue=1765, util=99.69% 00:16:03.186 nvme0n3: ios=17/512, merge=0/0, ticks=698/87, in_queue=785, util=88.24% 00:16:03.186 nvme0n4: ios=1772/2048, merge=0/0, ticks=1360/291, in_queue=1651, util=99.57% 00:16:03.186 23:57:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:03.186 [global] 00:16:03.186 thread=1 00:16:03.186 invalidate=1 00:16:03.186 rw=randwrite 00:16:03.186 time_based=1 00:16:03.186 runtime=1 00:16:03.186 ioengine=libaio 00:16:03.186 direct=1 00:16:03.186 bs=4096 00:16:03.186 iodepth=1 00:16:03.186 norandommap=0 00:16:03.186 numjobs=1 00:16:03.186 00:16:03.186 verify_dump=1 00:16:03.186 verify_backlog=512 00:16:03.186 verify_state_save=0 00:16:03.186 do_verify=1 00:16:03.186 verify=crc32c-intel 00:16:03.186 [job0] 00:16:03.186 filename=/dev/nvme0n1 00:16:03.186 [job1] 00:16:03.186 filename=/dev/nvme0n2 00:16:03.186 [job2] 00:16:03.186 filename=/dev/nvme0n3 00:16:03.186 [job3] 00:16:03.186 filename=/dev/nvme0n4 00:16:03.186 Could not set queue depth (nvme0n1) 00:16:03.186 Could not set queue depth (nvme0n2) 00:16:03.186 Could not set queue depth (nvme0n3) 00:16:03.186 Could not set queue depth (nvme0n4) 00:16:03.444 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:03.444 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:03.444 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:03.444 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:03.444 fio-3.35 00:16:03.444 Starting 4 threads 00:16:04.817 00:16:04.817 job0: (groupid=0, jobs=1): err= 0: pid=305154: Mon Dec 9 23:57:49 2024 00:16:04.817 read: IOPS=22, BW=88.6KiB/s 
(90.8kB/s)(92.0KiB/1038msec) 00:16:04.817 slat (nsec): min=11312, max=30234, avg=24854.13, stdev=3313.27 00:16:04.817 clat (usec): min=40732, max=41919, avg=41086.04, stdev=329.43 00:16:04.817 lat (usec): min=40743, max=41946, avg=41110.90, stdev=330.34 00:16:04.817 clat percentiles (usec): 00:16:04.817 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:04.817 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:04.817 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:16:04.817 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:04.817 | 99.99th=[41681] 00:16:04.817 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:16:04.817 slat (nsec): min=12129, max=58718, avg=14170.05, stdev=3472.16 00:16:04.817 clat (usec): min=130, max=330, avg=162.69, stdev=14.63 00:16:04.817 lat (usec): min=143, max=389, avg=176.86, stdev=16.35 00:16:04.817 clat percentiles (usec): 00:16:04.817 | 1.00th=[ 145], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:16:04.817 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:16:04.817 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 184], 00:16:04.817 | 99.00th=[ 215], 99.50th=[ 245], 99.90th=[ 330], 99.95th=[ 330], 00:16:04.817 | 99.99th=[ 330] 00:16:04.817 bw ( KiB/s): min= 4096, max= 4096, per=25.95%, avg=4096.00, stdev= 0.00, samples=1 00:16:04.817 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:04.817 lat (usec) : 250=95.51%, 500=0.19% 00:16:04.817 lat (msec) : 50=4.30% 00:16:04.817 cpu : usr=0.39%, sys=1.16%, ctx=536, majf=0, minf=1 00:16:04.817 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:04.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.817 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.817 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:04.817 job1: (groupid=0, jobs=1): err= 0: pid=305156: Mon Dec 9 23:57:49 2024 00:16:04.817 read: IOPS=2313, BW=9255KiB/s (9477kB/s)(9264KiB/1001msec) 00:16:04.817 slat (nsec): min=8368, max=44656, avg=9097.79, stdev=1342.89 00:16:04.817 clat (usec): min=171, max=293, avg=234.65, stdev=23.55 00:16:04.817 lat (usec): min=179, max=302, avg=243.74, stdev=23.53 00:16:04.817 clat percentiles (usec): 00:16:04.817 | 1.00th=[ 180], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 210], 00:16:04.817 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 245], 00:16:04.817 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 269], 00:16:04.817 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 289], 99.95th=[ 293], 00:16:04.817 | 99.99th=[ 293] 00:16:04.817 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:04.817 slat (nsec): min=11292, max=41682, avg=12566.97, stdev=1898.80 00:16:04.817 clat (usec): min=114, max=339, avg=152.32, stdev=23.86 00:16:04.817 lat (usec): min=126, max=380, avg=164.89, stdev=24.59 00:16:04.817 clat percentiles (usec): 00:16:04.817 | 1.00th=[ 121], 5.00th=[ 126], 10.00th=[ 129], 20.00th=[ 135], 00:16:04.817 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:16:04.817 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 190], 95.00th=[ 200], 00:16:04.817 | 99.00th=[ 227], 99.50th=[ 241], 99.90th=[ 293], 99.95th=[ 326], 00:16:04.817 | 99.99th=[ 338] 00:16:04.817 bw ( KiB/s): min=12056, max=12056, per=76.38%, 
avg=12056.00, stdev= 0.00, samples=1 00:16:04.817 iops : min= 3014, max= 3014, avg=3014.00, stdev= 0.00, samples=1 00:16:04.817 lat (usec) : 250=86.63%, 500=13.37% 00:16:04.817 cpu : usr=5.80%, sys=6.90%, ctx=4876, majf=0, minf=2 00:16:04.817 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:04.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.817 issued rwts: total=2316,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.817 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:04.817 job2: (groupid=0, jobs=1): err= 0: pid=305157: Mon Dec 9 23:57:49 2024 00:16:04.817 read: IOPS=22, BW=91.9KiB/s (94.1kB/s)(92.0KiB/1001msec) 00:16:04.817 slat (nsec): min=11568, max=32847, avg=25075.09, stdev=4622.26 00:16:04.817 clat (usec): min=257, max=42006, avg=39281.53, stdev=8512.29 00:16:04.817 lat (usec): min=283, max=42039, avg=39306.61, stdev=8512.13 00:16:04.817 clat percentiles (usec): 00:16:04.817 | 1.00th=[ 258], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:04.817 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:04.817 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:16:04.817 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:04.817 | 99.99th=[42206] 00:16:04.817 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:04.817 slat (nsec): min=12426, max=38934, avg=13459.19, stdev=1654.42 00:16:04.817 clat (usec): min=143, max=243, avg=172.18, stdev=12.69 00:16:04.817 lat (usec): min=156, max=282, avg=185.64, stdev=13.17 00:16:04.817 clat percentiles (usec): 00:16:04.817 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 161], 00:16:04.817 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 174], 00:16:04.817 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:16:04.817 | 99.00th=[ 208], 99.50th=[ 225], 99.90th=[ 243], 99.95th=[ 243], 00:16:04.817 | 99.99th=[ 243] 00:16:04.817 bw ( KiB/s): min= 4096, max= 4096, per=25.95%, avg=4096.00, stdev= 0.00, samples=1 00:16:04.817 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:04.817 lat (usec) : 250=95.70%, 500=0.19% 00:16:04.817 lat (msec) : 50=4.11% 00:16:04.817 cpu : usr=1.00%, sys=0.50%, ctx=536, majf=0, minf=1 00:16:04.817 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:04.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.817 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.817 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:04.817 job3: (groupid=0, jobs=1): err= 0: pid=305158: Mon Dec 9 23:57:49 2024 00:16:04.817 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:16:04.817 slat (nsec): min=11624, max=29527, avg=25177.91, stdev=3344.56 00:16:04.817 clat (usec): min=40638, max=42961, avg=41135.07, stdev=510.87 00:16:04.817 lat (usec): min=40650, max=42990, avg=41160.25, stdev=512.30 00:16:04.817 clat percentiles (usec): 00:16:04.817 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:04.817 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:04.817 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:16:04.817 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 
99.95th=[42730], 00:16:04.817 | 99.99th=[42730] 00:16:04.817 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:16:04.818 slat (nsec): min=12386, max=51742, avg=13611.83, stdev=2460.25 00:16:04.818 clat (usec): min=134, max=369, avg=186.98, stdev=23.24 00:16:04.818 lat (usec): min=147, max=384, avg=200.59, stdev=23.73 00:16:04.818 clat percentiles (usec): 00:16:04.818 | 1.00th=[ 141], 5.00th=[ 153], 10.00th=[ 163], 20.00th=[ 172], 00:16:04.818 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:16:04.818 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 227], 00:16:04.818 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 371], 99.95th=[ 371], 00:16:04.818 | 99.99th=[ 371] 00:16:04.818 bw ( KiB/s): min= 4096, max= 4096, per=25.95%, avg=4096.00, stdev= 0.00, samples=1 00:16:04.818 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:04.818 lat (usec) : 250=94.76%, 500=1.12% 00:16:04.818 lat (msec) : 50=4.12% 00:16:04.818 cpu : usr=0.99%, sys=0.50%, ctx=535, majf=0, minf=1 00:16:04.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:04.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.818 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:04.818 00:16:04.818 Run status group 0 (all jobs): 00:16:04.818 READ: bw=9187KiB/s (9407kB/s), 87.1KiB/s-9255KiB/s (89.2kB/s-9477kB/s), io=9536KiB (9765kB), run=1001-1038msec 00:16:04.818 WRITE: bw=15.4MiB/s (16.2MB/s), 1973KiB/s-9.99MiB/s (2020kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1038msec 00:16:04.818 00:16:04.818 Disk stats (read/write): 00:16:04.818 nvme0n1: ios=47/512, merge=0/0, ticks=1326/76, in_queue=1402, util=98.00% 00:16:04.818 nvme0n2: ios=1945/2048, merge=0/0, ticks=530/286, in_queue=816, util=87.62% 00:16:04.818 nvme0n3: ios=56/512, merge=0/0, ticks=1537/82, in_queue=1619, util=97.75% 00:16:04.818 nvme0n4: ios=43/512, merge=0/0, ticks=1645/92, in_queue=1737, util=99.13% 00:16:04.818 23:57:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:04.818 [global] 00:16:04.818 thread=1 00:16:04.818 invalidate=1 00:16:04.818 rw=write 00:16:04.818 time_based=1 00:16:04.818 runtime=1 00:16:04.818 ioengine=libaio 00:16:04.818 direct=1 00:16:04.818 bs=4096 00:16:04.818 iodepth=128 00:16:04.818 norandommap=0 00:16:04.818 numjobs=1 00:16:04.818 00:16:04.818 verify_dump=1 00:16:04.818 verify_backlog=512 00:16:04.818 verify_state_save=0 00:16:04.818 do_verify=1 00:16:04.818 verify=crc32c-intel 00:16:04.818 [job0] 00:16:04.818 filename=/dev/nvme0n1 00:16:04.818 [job1] 00:16:04.818 filename=/dev/nvme0n2 00:16:04.818 [job2] 00:16:04.818 filename=/dev/nvme0n3 00:16:04.818 [job3] 00:16:04.818 filename=/dev/nvme0n4 00:16:04.818 Could not set queue depth (nvme0n1) 00:16:04.818 Could not set queue depth (nvme0n2) 00:16:04.818 Could not set queue depth (nvme0n3) 00:16:04.818 Could not set queue depth (nvme0n4) 00:16:05.383 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:05.383 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:05.383 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:16:05.383 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:05.383 fio-3.35 00:16:05.383 Starting 4 threads 00:16:06.756 00:16:06.756 job0: (groupid=0, jobs=1): err= 0: pid=305580: Mon Dec 9 23:57:50 2024 00:16:06.756 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:16:06.756 slat (usec): min=2, max=11549, avg=101.04, stdev=551.40 00:16:06.756 clat (usec): min=6259, max=65043, avg=13265.61, stdev=8354.34 00:16:06.756 lat (usec): min=6269, max=65060, avg=13366.65, stdev=8402.06 00:16:06.756 clat percentiles (usec): 00:16:06.756 | 1.00th=[ 6980], 5.00th=[ 7570], 10.00th=[ 8029], 20.00th=[ 8356], 00:16:06.756 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[11207], 00:16:06.756 | 70.00th=[12518], 80.00th=[19006], 90.00th=[20841], 95.00th=[23200], 00:16:06.756 | 99.00th=[59507], 99.50th=[62653], 99.90th=[62653], 99.95th=[65274], 00:16:06.756 | 99.99th=[65274] 00:16:06.756 write: IOPS=5495, BW=21.5MiB/s (22.5MB/s)(21.5MiB/1003msec); 0 zone resets 00:16:06.756 slat (usec): min=2, max=5136, avg=79.18, stdev=425.61 00:16:06.756 clat (usec): min=483, max=21468, avg=10706.53, stdev=3057.93 00:16:06.756 lat (usec): min=3487, max=21485, avg=10785.71, stdev=3071.67 00:16:06.756 clat percentiles (usec): 00:16:06.756 | 1.00th=[ 5735], 5.00th=[ 7242], 10.00th=[ 7832], 20.00th=[ 8160], 00:16:06.756 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10683], 00:16:06.756 | 70.00th=[11076], 80.00th=[14484], 90.00th=[15270], 95.00th=[16581], 00:16:06.756 | 99.00th=[18744], 99.50th=[20317], 99.90th=[21365], 99.95th=[21365], 00:16:06.756 | 99.99th=[21365] 00:16:06.756 bw ( KiB/s): min=18496, max=24576, per=27.96%, avg=21536.00, stdev=4299.21, samples=2 00:16:06.756 iops : min= 4624, max= 6144, avg=5384.00, stdev=1074.80, samples=2 00:16:06.756 lat (usec) : 500=0.01% 00:16:06.756 lat (msec) : 4=0.26%, 10=48.23%, 20=43.17%, 50=7.43%, 100=0.89% 00:16:06.756 cpu : usr=6.89%, sys=8.28%, ctx=439, majf=0, minf=1 00:16:06.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:06.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.756 issued rwts: total=5120,5512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.756 job1: (groupid=0, jobs=1): err= 0: pid=305581: Mon Dec 9 23:57:50 2024 00:16:06.756 read: IOPS=4240, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1002msec) 00:16:06.756 slat (usec): min=2, max=10195, avg=77.99, stdev=496.04 00:16:06.756 clat (usec): min=1762, max=30734, avg=11302.56, stdev=3088.19 00:16:06.756 lat (usec): min=1768, max=30744, avg=11380.55, stdev=3121.62 00:16:06.756 clat percentiles (usec): 00:16:06.756 | 1.00th=[ 5997], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[ 9634], 00:16:06.756 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10814], 60.00th=[11076], 00:16:06.756 | 70.00th=[11338], 80.00th=[12649], 90.00th=[15533], 95.00th=[17695], 00:16:06.756 | 99.00th=[21627], 99.50th=[22414], 99.90th=[26084], 99.95th=[26084], 00:16:06.756 | 99.99th=[30802] 00:16:06.756 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:16:06.756 slat (usec): min=3, max=51889, avg=128.92, stdev=1658.90 00:16:06.756 clat (usec): min=1802, max=207623, avg=14042.94, stdev=17563.65 00:16:06.756 lat (usec): min=1814, max=207637, avg=14171.86, stdev=17799.35 00:16:06.756 clat 
percentiles (msec): 00:16:06.756 | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 10], 00:16:06.756 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:16:06.756 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 15], 95.00th=[ 51], 00:16:06.756 | 99.00th=[ 99], 99.50th=[ 150], 99.90th=[ 207], 99.95th=[ 209], 00:16:06.756 | 99.99th=[ 209] 00:16:06.756 bw ( KiB/s): min=12328, max=24536, per=23.93%, avg=18432.00, stdev=8632.36, samples=2 00:16:06.756 iops : min= 3082, max= 6134, avg=4608.00, stdev=2158.09, samples=2 00:16:06.756 lat (msec) : 2=0.27%, 4=0.15%, 10=38.52%, 20=56.69%, 50=1.50% 00:16:06.756 lat (msec) : 100=2.51%, 250=0.36% 00:16:06.756 cpu : usr=4.90%, sys=7.49%, ctx=352, majf=0, minf=1 00:16:06.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:06.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.756 issued rwts: total=4249,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.756 job2: (groupid=0, jobs=1): err= 0: pid=305582: Mon Dec 9 23:57:50 2024 00:16:06.756 read: IOPS=4775, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1004msec) 00:16:06.756 slat (nsec): min=1837, max=11538k, avg=102701.40, stdev=707320.81 00:16:06.756 clat (usec): min=2890, max=45934, avg=12888.33, stdev=5523.21 00:16:06.756 lat (usec): min=2900, max=45944, avg=12991.03, stdev=5566.87 00:16:06.756 clat percentiles (usec): 00:16:06.756 | 1.00th=[ 4490], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[10290], 00:16:06.756 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11469], 60.00th=[12387], 00:16:06.756 | 70.00th=[13042], 80.00th=[14222], 90.00th=[17957], 95.00th=[21627], 00:16:06.756 | 99.00th=[39584], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:16:06.756 | 99.99th=[45876] 00:16:06.756 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:16:06.756 slat (usec): min=2, max=9795, avg=84.17, stdev=508.33 00:16:06.756 clat (usec): min=669, max=53994, avg=12749.75, stdev=8372.04 00:16:06.756 lat (usec): min=682, max=54000, avg=12833.91, stdev=8422.50 00:16:06.756 clat percentiles (usec): 00:16:06.756 | 1.00th=[ 1549], 5.00th=[ 4228], 10.00th=[ 5538], 20.00th=[ 7177], 00:16:06.756 | 30.00th=[ 8979], 40.00th=[10159], 50.00th=[10814], 60.00th=[11600], 00:16:06.756 | 70.00th=[12780], 80.00th=[16909], 90.00th=[20055], 95.00th=[31327], 00:16:06.756 | 99.00th=[51119], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:16:06.756 | 99.99th=[53740] 00:16:06.756 bw ( KiB/s): min=19952, max=21008, per=26.59%, avg=20480.00, stdev=746.70, samples=2 00:16:06.756 iops : min= 4988, max= 5252, avg=5120.00, stdev=186.68, samples=2 00:16:06.756 lat (usec) : 750=0.02% 00:16:06.756 lat (msec) : 2=0.54%, 4=1.44%, 10=25.91%, 20=62.50%, 50=8.88% 00:16:06.756 lat (msec) : 100=0.71% 00:16:06.756 cpu : usr=4.39%, sys=8.18%, ctx=443, majf=0, minf=2 00:16:06.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:06.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.756 issued rwts: total=4795,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.756 job3: (groupid=0, jobs=1): err= 0: pid=305583: Mon Dec 9 23:57:50 2024 00:16:06.756 read: IOPS=3744, BW=14.6MiB/s 
(15.3MB/s)(14.7MiB/1004msec) 00:16:06.756 slat (nsec): min=1825, max=7277.6k, avg=113089.64, stdev=590637.63 00:16:06.756 clat (usec): min=2948, max=33784, avg=14475.98, stdev=5590.70 00:16:06.756 lat (usec): min=6427, max=33793, avg=14589.07, stdev=5621.02 00:16:06.756 clat percentiles (usec): 00:16:06.756 | 1.00th=[ 7963], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10945], 00:16:06.756 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12518], 60.00th=[13304], 00:16:06.756 | 70.00th=[13829], 80.00th=[17171], 90.00th=[24773], 95.00th=[27395], 00:16:06.756 | 99.00th=[30540], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:16:06.756 | 99.99th=[33817] 00:16:06.756 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:16:06.756 slat (usec): min=3, max=17512, avg=130.61, stdev=668.14 00:16:06.756 clat (usec): min=5830, max=49224, avg=17704.01, stdev=8468.96 00:16:06.756 lat (usec): min=5840, max=49229, avg=17834.63, stdev=8505.32 00:16:06.756 clat percentiles (usec): 00:16:06.756 | 1.00th=[ 7111], 5.00th=[10028], 10.00th=[10814], 20.00th=[11338], 00:16:06.756 | 30.00th=[12256], 40.00th=[12911], 50.00th=[14091], 60.00th=[18220], 00:16:06.756 | 70.00th=[20055], 80.00th=[21103], 90.00th=[29230], 95.00th=[36963], 00:16:06.756 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:16:06.756 | 99.99th=[49021] 00:16:06.756 bw ( KiB/s): min=16384, max=16384, per=21.27%, avg=16384.00, stdev= 0.00, samples=2 00:16:06.756 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:16:06.756 lat (msec) : 4=0.01%, 10=7.33%, 20=68.64%, 50=24.01% 00:16:06.756 cpu : usr=3.69%, sys=5.68%, ctx=464, majf=0, minf=1 00:16:06.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:06.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.756 issued rwts: total=3759,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.756 00:16:06.756 Run status group 0 (all jobs): 00:16:06.756 READ: bw=69.7MiB/s (73.1MB/s), 14.6MiB/s-19.9MiB/s (15.3MB/s-20.9MB/s), io=70.0MiB (73.4MB), run=1002-1004msec 00:16:06.756 WRITE: bw=75.2MiB/s (78.9MB/s), 15.9MiB/s-21.5MiB/s (16.7MB/s-22.5MB/s), io=75.5MiB (79.2MB), run=1002-1004msec 00:16:06.756 00:16:06.756 Disk stats (read/write): 00:16:06.756 nvme0n1: ios=4660/4775, merge=0/0, ticks=21762/16571, in_queue=38333, util=99.90% 00:16:06.756 nvme0n2: ios=3143/3584, merge=0/0, ticks=21382/20500, in_queue=41882, util=96.93% 00:16:06.756 nvme0n3: ios=3615/4037, merge=0/0, ticks=39657/46321, in_queue=85978, util=96.39% 00:16:06.756 nvme0n4: ios=3393/3584, merge=0/0, ticks=16821/19477, in_queue=36298, util=98.49% 00:16:06.756 23:57:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:06.756 [global] 00:16:06.756 thread=1 00:16:06.756 invalidate=1 00:16:06.756 rw=randwrite 00:16:06.756 time_based=1 00:16:06.756 runtime=1 00:16:06.756 ioengine=libaio 00:16:06.756 direct=1 00:16:06.756 bs=4096 00:16:06.756 iodepth=128 00:16:06.756 norandommap=0 00:16:06.756 numjobs=1 00:16:06.756 00:16:06.756 verify_dump=1 00:16:06.756 verify_backlog=512 00:16:06.756 verify_state_save=0 00:16:06.756 do_verify=1 00:16:06.756 verify=crc32c-intel 00:16:06.756 [job0] 00:16:06.756 filename=/dev/nvme0n1 00:16:06.756 [job1] 
00:16:06.756 filename=/dev/nvme0n2 00:16:06.756 [job2] 00:16:06.756 filename=/dev/nvme0n3 00:16:06.756 [job3] 00:16:06.756 filename=/dev/nvme0n4 00:16:06.756 Could not set queue depth (nvme0n1) 00:16:06.757 Could not set queue depth (nvme0n2) 00:16:06.757 Could not set queue depth (nvme0n3) 00:16:06.757 Could not set queue depth (nvme0n4) 00:16:06.757 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:06.757 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:06.757 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:06.757 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:06.757 fio-3.35 00:16:06.757 Starting 4 threads 00:16:08.131 00:16:08.131 job0: (groupid=0, jobs=1): err= 0: pid=306000: Mon Dec 9 23:57:52 2024 00:16:08.131 read: IOPS=4765, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1007msec) 00:16:08.131 slat (nsec): min=1988, max=7972.4k, avg=96750.84, stdev=534003.30 00:16:08.131 clat (usec): min=514, max=23753, avg=12705.84, stdev=2297.60 00:16:08.131 lat (usec): min=6734, max=23761, avg=12802.59, stdev=2314.89 00:16:08.131 clat percentiles (usec): 00:16:08.131 | 1.00th=[ 8717], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10945], 00:16:08.131 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12125], 60.00th=[12518], 00:16:08.131 | 70.00th=[13173], 80.00th=[14615], 90.00th=[15795], 95.00th=[17433], 00:16:08.131 | 99.00th=[19792], 99.50th=[20055], 99.90th=[23725], 99.95th=[23725], 00:16:08.131 | 99.99th=[23725] 00:16:08.131 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:16:08.131 slat (usec): min=2, max=9338, avg=98.63, stdev=540.87 00:16:08.131 clat (usec): min=1982, max=33353, avg=12991.10, stdev=4609.98 00:16:08.131 lat (usec): min=2014, max=33359, avg=13089.72, stdev=4646.85 00:16:08.131 clat percentiles (usec): 00:16:08.131 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[10945], 00:16:08.131 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:16:08.131 | 70.00th=[11994], 80.00th=[13304], 90.00th=[17957], 95.00th=[25822], 00:16:08.131 | 99.00th=[30802], 99.50th=[31065], 99.90th=[33424], 99.95th=[33424], 00:16:08.131 | 99.99th=[33424] 00:16:08.131 bw ( KiB/s): min=19224, max=21736, per=25.00%, avg=20480.00, stdev=1776.25, samples=2 00:16:08.131 iops : min= 4806, max= 5434, avg=5120.00, stdev=444.06, samples=2 00:16:08.131 lat (usec) : 750=0.01% 00:16:08.131 lat (msec) : 2=0.01%, 4=0.01%, 10=7.55%, 20=87.80%, 50=4.62% 00:16:08.131 cpu : usr=4.17%, sys=6.66%, ctx=473, majf=0, minf=1 00:16:08.131 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:08.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.131 issued rwts: total=4799,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.131 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.131 job1: (groupid=0, jobs=1): err= 0: pid=306001: Mon Dec 9 23:57:52 2024 00:16:08.131 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:16:08.131 slat (usec): min=2, max=10154, avg=88.94, stdev=625.79 00:16:08.131 clat (usec): min=3319, max=21274, avg=11116.69, stdev=2702.81 00:16:08.131 lat (usec): min=3326, max=21284, avg=11205.63, stdev=2745.43 00:16:08.131 clat percentiles 
(usec): 00:16:08.132 | 1.00th=[ 4555], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[ 9372], 00:16:08.132 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[10814], 60.00th=[11076], 00:16:08.132 | 70.00th=[11338], 80.00th=[12518], 90.00th=[15401], 95.00th=[16909], 00:16:08.132 | 99.00th=[19268], 99.50th=[19792], 99.90th=[20841], 99.95th=[20841], 00:16:08.132 | 99.99th=[21365] 00:16:08.132 write: IOPS=6353, BW=24.8MiB/s (26.0MB/s)(25.0MiB/1006msec); 0 zone resets 00:16:08.132 slat (usec): min=2, max=8640, avg=63.53, stdev=302.82 00:16:08.132 clat (usec): min=1915, max=20722, avg=9275.80, stdev=2091.74 00:16:08.132 lat (usec): min=1966, max=20726, avg=9339.33, stdev=2116.66 00:16:08.132 clat percentiles (usec): 00:16:08.132 | 1.00th=[ 3326], 5.00th=[ 4752], 10.00th=[ 5997], 20.00th=[ 8586], 00:16:08.132 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:16:08.132 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11207], 95.00th=[11338], 00:16:08.132 | 99.00th=[14484], 99.50th=[16450], 99.90th=[19792], 99.95th=[20055], 00:16:08.132 | 99.99th=[20841] 00:16:08.132 bw ( KiB/s): min=23576, max=26544, per=30.59%, avg=25060.00, stdev=2098.69, samples=2 00:16:08.132 iops : min= 5894, max= 6636, avg=6265.00, stdev=524.67, samples=2 00:16:08.132 lat (msec) : 2=0.02%, 4=1.46%, 10=52.72%, 20=45.54%, 50=0.26% 00:16:08.132 cpu : usr=6.07%, sys=7.56%, ctx=756, majf=0, minf=1 00:16:08.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:08.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.132 issued rwts: total=6144,6392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.132 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.132 job2: (groupid=0, jobs=1): err= 0: pid=306002: Mon Dec 9 23:57:52 2024 00:16:08.132 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:16:08.132 slat (usec): min=2, max=3303, avg=94.26, stdev=452.28 00:16:08.132 clat (usec): min=8280, max=15160, avg=12306.35, stdev=1188.19 00:16:08.132 lat (usec): min=9252, max=17543, avg=12400.61, stdev=1140.08 00:16:08.132 clat percentiles (usec): 00:16:08.132 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10683], 20.00th=[11207], 00:16:08.132 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[12911], 00:16:08.132 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13698], 95.00th=[13960], 00:16:08.132 | 99.00th=[14746], 99.50th=[14877], 99.90th=[15139], 99.95th=[15139], 00:16:08.132 | 99.99th=[15139] 00:16:08.132 write: IOPS=5286, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1001msec); 0 zone resets 00:16:08.132 slat (usec): min=2, max=3351, avg=91.33, stdev=411.88 00:16:08.132 clat (usec): min=346, max=15131, avg=11984.96, stdev=1494.46 00:16:08.132 lat (usec): min=2627, max=15137, avg=12076.28, stdev=1454.67 00:16:08.132 clat percentiles (usec): 00:16:08.132 | 1.00th=[ 6063], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[10945], 00:16:08.132 | 30.00th=[11207], 40.00th=[11469], 50.00th=[12125], 60.00th=[12649], 00:16:08.132 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13566], 95.00th=[13698], 00:16:08.132 | 99.00th=[14353], 99.50th=[14615], 99.90th=[15139], 99.95th=[15139], 00:16:08.132 | 99.99th=[15139] 00:16:08.132 bw ( KiB/s): min=21648, max=21648, per=26.42%, avg=21648.00, stdev= 0.00, samples=1 00:16:08.132 iops : min= 5412, max= 5412, avg=5412.00, stdev= 0.00, samples=1 00:16:08.132 lat (usec) : 500=0.01% 00:16:08.132 lat (msec) : 4=0.31%, 10=3.88%, 20=95.80% 00:16:08.132 cpu : 
usr=5.10%, sys=5.60%, ctx=605, majf=0, minf=1 00:16:08.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:08.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.132 issued rwts: total=5120,5292,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.132 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.132 job3: (groupid=0, jobs=1): err= 0: pid=306004: Mon Dec 9 23:57:52 2024 00:16:08.132 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:16:08.132 slat (nsec): min=1765, max=7871.8k, avg=152735.10, stdev=804335.51 00:16:08.132 clat (usec): min=6138, max=49471, avg=18559.11, stdev=11568.45 00:16:08.132 lat (usec): min=6156, max=49482, avg=18711.84, stdev=11647.90 00:16:08.132 clat percentiles (usec): 00:16:08.132 | 1.00th=[ 7373], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[12125], 00:16:08.132 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[14222], 00:16:08.132 | 70.00th=[15926], 80.00th=[22152], 90.00th=[42206], 95.00th=[48497], 00:16:08.132 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:16:08.132 | 99.99th=[49546] 00:16:08.132 write: IOPS=3807, BW=14.9MiB/s (15.6MB/s)(14.9MiB/1004msec); 0 zone resets 00:16:08.132 slat (usec): min=2, max=10975, avg=111.19, stdev=620.38 00:16:08.132 clat (usec): min=437, max=43017, avg=15768.17, stdev=6798.34 00:16:08.132 lat (usec): min=5196, max=43022, avg=15879.36, stdev=6817.86 00:16:08.132 clat percentiles (usec): 00:16:08.132 | 1.00th=[ 7046], 5.00th=[ 9241], 10.00th=[11207], 20.00th=[12256], 00:16:08.132 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13829], 00:16:08.132 | 70.00th=[15139], 80.00th=[15926], 90.00th=[27132], 95.00th=[32637], 00:16:08.132 | 99.00th=[38536], 99.50th=[40109], 99.90th=[42730], 99.95th=[43254], 00:16:08.132 | 99.99th=[43254] 00:16:08.132 bw ( KiB/s): min=12656, max=16904, per=18.04%, avg=14780.00, stdev=3003.79, samples=2 00:16:08.132 iops : min= 3164, max= 4226, avg=3695.00, stdev=750.95, samples=2 00:16:08.132 lat (usec) : 500=0.01% 00:16:08.132 lat (msec) : 10=7.61%, 20=72.01%, 50=20.36% 00:16:08.132 cpu : usr=3.39%, sys=6.18%, ctx=356, majf=0, minf=1 00:16:08.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:08.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.132 issued rwts: total=3584,3823,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.132 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.132 00:16:08.132 Run status group 0 (all jobs): 00:16:08.132 READ: bw=76.2MiB/s (79.9MB/s), 13.9MiB/s-23.9MiB/s (14.6MB/s-25.0MB/s), io=76.7MiB (80.5MB), run=1001-1007msec 00:16:08.132 WRITE: bw=80.0MiB/s (83.9MB/s), 14.9MiB/s-24.8MiB/s (15.6MB/s-26.0MB/s), io=80.6MiB (84.5MB), run=1001-1007msec 00:16:08.132 00:16:08.132 Disk stats (read/write): 00:16:08.132 nvme0n1: ios=3921/4096, merge=0/0, ticks=18283/20367, in_queue=38650, util=83.97% 00:16:08.132 nvme0n2: ios=5120/5239, merge=0/0, ticks=53727/47065, in_queue=100792, util=84.84% 00:16:08.132 nvme0n3: ios=4096/4591, merge=0/0, ticks=12442/12977, in_queue=25419, util=88.18% 00:16:08.132 nvme0n4: ios=2832/3072, merge=0/0, ticks=21160/16928, in_queue=38088, util=100.00% 00:16:08.132 23:57:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:08.132 23:57:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=306273 00:16:08.132 23:57:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:08.132 23:57:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:08.132 [global] 00:16:08.132 thread=1 00:16:08.132 invalidate=1 00:16:08.132 rw=read 00:16:08.132 time_based=1 00:16:08.132 runtime=10 00:16:08.132 ioengine=libaio 00:16:08.132 direct=1 00:16:08.132 bs=4096 00:16:08.132 iodepth=1 00:16:08.132 norandommap=1 00:16:08.132 numjobs=1 00:16:08.132 00:16:08.132 [job0] 00:16:08.132 filename=/dev/nvme0n1 00:16:08.132 [job1] 00:16:08.132 filename=/dev/nvme0n2 00:16:08.132 [job2] 00:16:08.132 filename=/dev/nvme0n3 00:16:08.132 [job3] 00:16:08.132 filename=/dev/nvme0n4 00:16:08.132 Could not set queue depth (nvme0n1) 00:16:08.132 Could not set queue depth (nvme0n2) 00:16:08.132 Could not set queue depth (nvme0n3) 00:16:08.132 Could not set queue depth (nvme0n4) 00:16:08.698 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.698 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.698 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.698 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.698 fio-3.35 00:16:08.698 Starting 4 threads 00:16:11.229 23:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:11.229 23:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:11.229 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=270336, buflen=4096 00:16:11.230 fio: pid=306429, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:11.488 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44032000, buflen=4096 00:16:11.488 fio: pid=306427, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:11.488 23:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:11.488 23:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:11.747 23:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:11.747 23:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:11.747 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=356352, buflen=4096 00:16:11.747 fio: pid=306425, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:12.006 23:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:12.006 23:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:12.006 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=39993344, buflen=4096 00:16:12.006 fio: pid=306426, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:16:12.006 00:16:12.006 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=306425: Mon Dec 9 23:57:56 2024 00:16:12.006 read: IOPS=29, BW=115KiB/s (118kB/s)(348KiB/3021msec) 00:16:12.006 slat (usec): min=9, max=1639, avg=39.50, stdev=172.63 00:16:12.006 clat (usec): min=269, max=41972, avg=34443.93, stdev=15001.24 00:16:12.006 lat (usec): min=287, max=42989, avg=34483.76, stdev=15009.74 00:16:12.006 clat percentiles (usec): 00:16:12.006 | 1.00th=[ 269], 5.00th=[ 318], 10.00th=[ 412], 20.00th=[40633], 00:16:12.006 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:12.006 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:12.006 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:12.006 | 99.99th=[42206] 00:16:12.006 bw ( KiB/s): min= 96, max= 144, per=0.45%, avg=116.80, stdev=19.27, samples=5 00:16:12.006 iops : min= 24, max= 36, avg=29.20, stdev= 4.82, samples=5 00:16:12.006 lat (usec) : 500=13.64%, 750=2.27% 00:16:12.006 lat (msec) : 50=82.95% 00:16:12.006 cpu : usr=0.10%, sys=0.00%, ctx=89, majf=0, minf=1 00:16:12.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.006 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.006 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.006 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=306426: Mon Dec 9 23:57:56 2024 00:16:12.006 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(38.1MiB/3220msec) 00:16:12.006 slat (usec): min=8, max=31646, avg=16.15, stdev=387.58 00:16:12.006 clat (usec): min=164, max=42002, avg=312.08, stdev=2072.28 00:16:12.006 lat (usec): min=173, max=72927, avg=327.50, stdev=2223.95 00:16:12.006 clat percentiles (usec): 00:16:12.006 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 196], 00:16:12.006 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:16:12.006 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 229], 00:16:12.006 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[41157], 99.95th=[41681], 00:16:12.006 | 99.99th=[42206] 00:16:12.006 bw ( KiB/s): min= 409, max=18496, per=50.67%, avg=13009.50, stdev=8514.80, samples=6 00:16:12.006 iops : min= 102, max= 4624, avg=3252.33, stdev=2128.77, samples=6 00:16:12.006 lat (usec) : 250=98.69%, 500=1.02%, 750=0.02% 00:16:12.006 lat (msec) : 50=0.26% 00:16:12.006 cpu : usr=1.18%, sys=3.76%, ctx=9769, majf=0, minf=2 00:16:12.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.006 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.006 issued rwts: total=9765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.006 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=306427: Mon Dec 9 23:57:56 2024 
00:16:12.006 read: IOPS=3847, BW=15.0MiB/s (15.8MB/s)(42.0MiB/2794msec) 00:16:12.006 slat (usec): min=8, max=17548, avg=12.54, stdev=236.38 00:16:12.006 clat (usec): min=172, max=447, avg=244.11, stdev=18.41 00:16:12.006 lat (usec): min=181, max=17985, avg=256.65, stdev=239.79 00:16:12.006 clat percentiles (usec): 00:16:12.006 | 1.00th=[ 184], 5.00th=[ 204], 10.00th=[ 227], 20.00th=[ 235], 00:16:12.006 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:16:12.006 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:16:12.006 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 347], 99.95th=[ 400], 00:16:12.006 | 99.99th=[ 441] 00:16:12.006 bw ( KiB/s): min=15464, max=15528, per=60.42%, avg=15513.60, stdev=27.94, samples=5 00:16:12.006 iops : min= 3866, max= 3882, avg=3878.40, stdev= 6.99, samples=5 00:16:12.006 lat (usec) : 250=61.12%, 500=38.87% 00:16:12.006 cpu : usr=1.65%, sys=4.33%, ctx=10753, majf=0, minf=2 00:16:12.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.007 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.007 issued rwts: total=10751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.007 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=306429: Mon Dec 9 23:57:56 2024 00:16:12.007 read: IOPS=25, BW=101KiB/s (104kB/s)(264KiB/2607msec) 00:16:12.007 slat (nsec): min=11331, max=32719, avg=24430.09, stdev=3335.92 00:16:12.007 clat (usec): min=255, max=42032, avg=39151.40, stdev=8527.31 00:16:12.007 lat (usec): min=280, max=42057, avg=39175.84, stdev=8526.81 00:16:12.007 clat percentiles (usec): 00:16:12.007 | 1.00th=[ 255], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:12.007 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:12.007 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:12.007 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:12.007 | 99.99th=[42206] 00:16:12.007 bw ( KiB/s): min= 96, max= 104, per=0.39%, avg=100.80, stdev= 4.38, samples=5 00:16:12.007 iops : min= 24, max= 26, avg=25.20, stdev= 1.10, samples=5 00:16:12.007 lat (usec) : 500=4.48% 00:16:12.007 lat (msec) : 50=94.03% 00:16:12.007 cpu : usr=0.04%, sys=0.04%, ctx=70, majf=0, minf=2 00:16:12.007 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.007 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.007 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.007 00:16:12.007 Run status group 0 (all jobs): 00:16:12.007 READ: bw=25.1MiB/s (26.3MB/s), 101KiB/s-15.0MiB/s (104kB/s-15.8MB/s), io=80.7MiB (84.7MB), run=2607-3220msec 00:16:12.007 00:16:12.007 Disk stats (read/write): 00:16:12.007 nvme0n1: ios=81/0, merge=0/0, ticks=2752/0, in_queue=2752, util=92.92% 00:16:12.007 nvme0n2: ios=9760/0, merge=0/0, ticks=2833/0, in_queue=2833, util=93.12% 00:16:12.007 nvme0n3: ios=10750/0, merge=0/0, ticks=2564/0, in_queue=2564, util=94.79% 00:16:12.007 nvme0n4: ios=93/0, merge=0/0, ticks=2647/0, in_queue=2647, util=99.92% 00:16:12.265 23:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # 
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:12.265 23:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:12.265 23:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:12.265 23:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:12.524 23:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:12.524 23:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:12.782 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:12.782 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:13.041 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:13.041 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 306273 00:16:13.041 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:13.041 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:13.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.041 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:13.041 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:16:13.041 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:13.041 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.041 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:13.041 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.041 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:16:13.041 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:13.041 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:13.041 nvmf hotplug test: fio failed as expected 00:16:13.041 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 
00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:13.300 rmmod nvme_tcp 00:16:13.300 rmmod nvme_fabrics 00:16:13.300 rmmod nvme_keyring 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 303197 ']' 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 303197 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 303197 ']' 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 303197 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.300 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 303197 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 303197' 00:16:13.560 killing process with pid 303197 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 303197 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 303197 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.560 23:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:16.104 00:16:16.104 real 0m28.741s 00:16:16.104 user 2m4.694s 00:16:16.104 sys 0m10.730s 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.104 ************************************ 00:16:16.104 END TEST nvmf_fio_target 00:16:16.104 ************************************ 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:16:16.104 ************************************ 00:16:16.104 START TEST nvmf_bdevio 00:16:16.104 ************************************ 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:16.104 * Looking for test storage... 
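Between the fio test and the bdevio test the harness runs nvmftestfini: the host-side NVMe/TCP modules are unloaded, the nvmf_tgt reactor process is killed, the SPDK_NVMF iptables rules are stripped, and the test namespace plus leftover address are removed. Roughly, as traced above (the pid, netns and interface names are the ones from this run; the exact _remove_spdk_ns body is an assumption):

    modprobe -v -r nvme-tcp             # also drags out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 303197 && wait 303197          # the nvmf_tgt started for the fio test
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything but the test rules
    ip netns delete cvl_0_0_ns_spdk     # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1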
00:16:16.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:16.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.104 --rc genhtml_branch_coverage=1 00:16:16.104 --rc genhtml_function_coverage=1 00:16:16.104 --rc genhtml_legend=1 00:16:16.104 --rc geninfo_all_blocks=1 00:16:16.104 --rc geninfo_unexecuted_blocks=1 00:16:16.104 00:16:16.104 ' 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:16.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.104 --rc genhtml_branch_coverage=1 00:16:16.104 --rc genhtml_function_coverage=1 00:16:16.104 --rc genhtml_legend=1 00:16:16.104 --rc geninfo_all_blocks=1 00:16:16.104 --rc geninfo_unexecuted_blocks=1 00:16:16.104 00:16:16.104 ' 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:16.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.104 --rc genhtml_branch_coverage=1 00:16:16.104 --rc genhtml_function_coverage=1 00:16:16.104 --rc genhtml_legend=1 00:16:16.104 --rc geninfo_all_blocks=1 00:16:16.104 --rc geninfo_unexecuted_blocks=1 00:16:16.104 00:16:16.104 ' 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:16.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.104 --rc genhtml_branch_coverage=1 00:16:16.104 --rc genhtml_function_coverage=1 00:16:16.104 --rc genhtml_legend=1 00:16:16.104 --rc geninfo_all_blocks=1 00:16:16.104 --rc geninfo_unexecuted_blocks=1 00:16:16.104 00:16:16.104 ' 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.104 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:16.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:16:16.105 23:58:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:24.238 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:24.238 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:24.238 23:58:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:24.238 Found net devices under 0000:af:00.0: cvl_0_0 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:24.238 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:24.239 Found net devices under 0000:af:00.1: cvl_0_1 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:24.239 
23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:24.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:16:24.239 00:16:24.239 --- 10.0.0.2 ping statistics --- 00:16:24.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.239 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:24.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:24.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:16:24.239 00:16:24.239 --- 10.0.0.1 ping statistics --- 00:16:24.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.239 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=311112 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 311112 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 311112 ']' 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.239 23:58:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.239 [2024-12-09 23:58:07.805617] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
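With the e810 ports discovered above (cvl_0_0 and cvl_0_1), the harness builds the phy test topology: the target port is moved into a private namespace and addressed as 10.0.0.2, the initiator port stays in the root namespace as 10.0.0.1, TCP port 4420 is opened, connectivity is checked in both directions, and nvmf_tgt is launched inside the namespace. A condensed sketch of the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # real rule carries an SPDK_NVMF comment so fini can strip it
    ping -c 1 10.0.0.2                                             # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78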
00:16:24.239 [2024-12-09 23:58:07.805670] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.239 [2024-12-09 23:58:07.903759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:24.239 [2024-12-09 23:58:07.945207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.239 [2024-12-09 23:58:07.945243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.239 [2024-12-09 23:58:07.945253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.239 [2024-12-09 23:58:07.945261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.239 [2024-12-09 23:58:07.945268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.239 [2024-12-09 23:58:07.947042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:24.239 [2024-12-09 23:58:07.947152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:16:24.239 [2024-12-09 23:58:07.947258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:24.239 [2024-12-09 23:58:07.947260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:16:24.239 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.239 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:16:24.239 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:24.239 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:24.239 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.239 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.239 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:24.239 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.239 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.239 [2024-12-09 23:58:08.694035] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.239 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.239 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:24.239 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.239 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.497 Malloc0 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.498 23:58:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.498 [2024-12-09 23:58:08.766450] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:24.498 { 00:16:24.498 "params": { 00:16:24.498 "name": "Nvme$subsystem", 00:16:24.498 "trtype": "$TEST_TRANSPORT", 00:16:24.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:24.498 "adrfam": "ipv4", 00:16:24.498 "trsvcid": "$NVMF_PORT", 00:16:24.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:24.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:24.498 "hdgst": ${hdgst:-false}, 00:16:24.498 "ddgst": ${ddgst:-false} 00:16:24.498 }, 00:16:24.498 "method": "bdev_nvme_attach_controller" 00:16:24.498 } 00:16:24.498 EOF 00:16:24.498 )") 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:16:24.498 23:58:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:24.498 "params": { 00:16:24.498 "name": "Nvme1", 00:16:24.498 "trtype": "tcp", 00:16:24.498 "traddr": "10.0.0.2", 00:16:24.498 "adrfam": "ipv4", 00:16:24.498 "trsvcid": "4420", 00:16:24.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:24.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:24.498 "hdgst": false, 00:16:24.498 "ddgst": false 00:16:24.498 }, 00:16:24.498 "method": "bdev_nvme_attach_controller" 00:16:24.498 }' 00:16:24.498 [2024-12-09 23:58:08.820419] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
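The bdevio target is assembled with five RPC calls and then exercised by handing bdevio a JSON config that attaches an NVMe-oF controller to the listener just created. A sketch of the bring-up as traced above (rpc_cmd is assumed to wrap scripts/rpc.py against the nvmf_tgt running in the namespace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdev_nvme_attach_controller block printed by gen_nvmf_target_json above (name Nvme1, traddr 10.0.0.2, trsvcid 4420, subnqn cnode1) is what bdevio reads on /dev/fd/62, which suggests it is fed in via bash process substitution; the attached controller's namespace is the Nvme1n1 block device the CUnit suite exercises below.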
00:16:24.498 [2024-12-09 23:58:08.820465] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311219 ] 00:16:24.498 [2024-12-09 23:58:08.912620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:24.498 [2024-12-09 23:58:08.954478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.498 [2024-12-09 23:58:08.954585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.498 [2024-12-09 23:58:08.954586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.063 I/O targets: 00:16:25.063 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:25.063 00:16:25.063 00:16:25.063 CUnit - A unit testing framework for C - Version 2.1-3 00:16:25.063 http://cunit.sourceforge.net/ 00:16:25.063 00:16:25.063 00:16:25.063 Suite: bdevio tests on: Nvme1n1 00:16:25.063 Test: blockdev write read block ...passed 00:16:25.063 Test: blockdev write zeroes read block ...passed 00:16:25.063 Test: blockdev write zeroes read no split ...passed 00:16:25.063 Test: blockdev write zeroes read split ...passed 00:16:25.063 Test: blockdev write zeroes read split partial ...passed 00:16:25.063 Test: blockdev reset ...[2024-12-09 23:58:09.473519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:25.063 [2024-12-09 23:58:09.473587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb74590 (9): Bad file descriptor 00:16:25.063 [2024-12-09 23:58:09.484672] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:16:25.063 passed 00:16:25.063 Test: blockdev write read 8 blocks ...passed 00:16:25.063 Test: blockdev write read size > 128k ...passed 00:16:25.063 Test: blockdev write read invalid size ...passed 00:16:25.321 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:25.321 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:25.321 Test: blockdev write read max offset ...passed 00:16:25.321 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:25.321 Test: blockdev writev readv 8 blocks ...passed 00:16:25.321 Test: blockdev writev readv 30 x 1block ...passed 00:16:25.321 Test: blockdev writev readv block ...passed 00:16:25.321 Test: blockdev writev readv size > 128k ...passed 00:16:25.321 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:25.321 Test: blockdev comparev and writev ...[2024-12-09 23:58:09.694573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.321 [2024-12-09 23:58:09.694610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.321 [2024-12-09 23:58:09.694626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.321 [2024-12-09 23:58:09.694637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:25.321 [2024-12-09 23:58:09.694877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.321 [2024-12-09 23:58:09.694894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:25.321 [2024-12-09 23:58:09.694908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.321 [2024-12-09 23:58:09.694917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:25.321 [2024-12-09 23:58:09.695148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.322 [2024-12-09 23:58:09.695161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:25.322 [2024-12-09 23:58:09.695175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.322 [2024-12-09 23:58:09.695184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:25.322 [2024-12-09 23:58:09.695422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.322 [2024-12-09 23:58:09.695434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:25.322 [2024-12-09 23:58:09.695448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.322 [2024-12-09 23:58:09.695457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:25.322 passed 00:16:25.322 Test: blockdev nvme passthru rw ...passed 00:16:25.322 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:58:09.777122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.322 [2024-12-09 23:58:09.777140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:25.322 [2024-12-09 23:58:09.777251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.322 [2024-12-09 23:58:09.777263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:25.322 [2024-12-09 23:58:09.777371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.322 [2024-12-09 23:58:09.777383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:25.322 [2024-12-09 23:58:09.777491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.322 [2024-12-09 23:58:09.777504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:25.322 passed 00:16:25.322 Test: blockdev nvme admin passthru ...passed 00:16:25.579 Test: blockdev copy ...passed 00:16:25.579 00:16:25.579 Run Summary: Type Total Ran Passed Failed Inactive 00:16:25.579 suites 1 1 n/a 0 0 00:16:25.579 tests 23 23 23 0 0 00:16:25.579 asserts 152 152 152 0 n/a 00:16:25.579 00:16:25.579 Elapsed time = 1.126 seconds 00:16:25.579 23:58:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.579 23:58:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.579 23:58:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:25.579 23:58:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.579 23:58:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:25.579 23:58:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:25.579 23:58:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:25.579 23:58:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:16:25.579 23:58:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:25.579 23:58:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:16:25.579 23:58:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:25.579 23:58:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:25.579 rmmod nvme_tcp 00:16:25.579 rmmod nvme_fabrics 00:16:25.579 rmmod nvme_keyring 00:16:25.579 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:25.579 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:16:25.579 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
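What follows is the same teardown pattern seen after the fio test, this time for the bdevio target (pid 311112). The killprocess guard traced below roughly reduces to the following sketch (the real helper in autotest_common.sh treats the sudo case specially; simplified here):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                     # only proceed if it is still alive
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")    # reactor_3 for this nvmf_tgt
            [ "$name" = sudo ] && return 1             # simplification; real helper handles sudo differently
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }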
00:16:25.579 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 311112 ']' 00:16:25.579 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 311112 00:16:25.579 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 311112 ']' 00:16:25.579 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 311112 00:16:25.579 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 311112 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 311112' 00:16:25.839 killing process with pid 311112 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 311112 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 311112 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.839 23:58:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:28.383 00:16:28.383 real 0m12.222s 00:16:28.383 user 0m13.941s 00:16:28.383 sys 0m6.301s 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:28.383 ************************************ 00:16:28.383 END TEST nvmf_bdevio 00:16:28.383 ************************************ 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:28.383 00:16:28.383 real 5m3.819s 00:16:28.383 user 11m3.755s 00:16:28.383 sys 2m1.798s 00:16:28.383 
23:58:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:16:28.383 ************************************ 00:16:28.383 END TEST nvmf_target_core 00:16:28.383 ************************************ 00:16:28.383 23:58:12 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:16:28.383 23:58:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:28.383 23:58:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.383 23:58:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:28.383 ************************************ 00:16:28.383 START TEST nvmf_target_extra 00:16:28.383 ************************************ 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:16:28.383 * Looking for test storage... 00:16:28.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:16:28.383 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:28.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.384 --rc genhtml_branch_coverage=1 00:16:28.384 --rc genhtml_function_coverage=1 00:16:28.384 --rc genhtml_legend=1 00:16:28.384 --rc geninfo_all_blocks=1 00:16:28.384 --rc geninfo_unexecuted_blocks=1 00:16:28.384 00:16:28.384 ' 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:28.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.384 --rc genhtml_branch_coverage=1 00:16:28.384 --rc genhtml_function_coverage=1 00:16:28.384 --rc genhtml_legend=1 00:16:28.384 --rc geninfo_all_blocks=1 00:16:28.384 --rc geninfo_unexecuted_blocks=1 00:16:28.384 00:16:28.384 ' 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:28.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.384 --rc genhtml_branch_coverage=1 00:16:28.384 --rc genhtml_function_coverage=1 00:16:28.384 --rc genhtml_legend=1 00:16:28.384 --rc geninfo_all_blocks=1 00:16:28.384 --rc geninfo_unexecuted_blocks=1 00:16:28.384 00:16:28.384 ' 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:28.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.384 --rc genhtml_branch_coverage=1 00:16:28.384 --rc genhtml_function_coverage=1 00:16:28.384 --rc genhtml_legend=1 00:16:28.384 --rc geninfo_all_blocks=1 00:16:28.384 --rc geninfo_unexecuted_blocks=1 00:16:28.384 00:16:28.384 ' 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:28.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:28.384 ************************************ 00:16:28.384 START TEST nvmf_example 00:16:28.384 ************************************ 00:16:28.384 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:16:28.646 * Looking for test storage... 
00:16:28.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:28.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.646 --rc genhtml_branch_coverage=1 00:16:28.646 --rc genhtml_function_coverage=1 00:16:28.646 --rc genhtml_legend=1 00:16:28.646 --rc geninfo_all_blocks=1 00:16:28.646 --rc geninfo_unexecuted_blocks=1 00:16:28.646 00:16:28.646 ' 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:28.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.646 --rc genhtml_branch_coverage=1 00:16:28.646 --rc genhtml_function_coverage=1 00:16:28.646 --rc genhtml_legend=1 00:16:28.646 --rc geninfo_all_blocks=1 00:16:28.646 --rc geninfo_unexecuted_blocks=1 00:16:28.646 00:16:28.646 ' 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:28.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.646 --rc genhtml_branch_coverage=1 00:16:28.646 --rc genhtml_function_coverage=1 00:16:28.646 --rc genhtml_legend=1 00:16:28.646 --rc geninfo_all_blocks=1 00:16:28.646 --rc geninfo_unexecuted_blocks=1 00:16:28.646 00:16:28.646 ' 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:28.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.646 --rc genhtml_branch_coverage=1 00:16:28.646 --rc genhtml_function_coverage=1 00:16:28.646 --rc genhtml_legend=1 00:16:28.646 --rc geninfo_all_blocks=1 00:16:28.646 --rc geninfo_unexecuted_blocks=1 00:16:28.646 00:16:28.646 ' 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:16:28.646 23:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.646 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:28.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:28.646 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:16:28.647 23:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:16:28.647 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:16:36.786 23:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:36.786 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:36.786 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:36.786 Found net devices under 0000:af:00.0: cvl_0_0 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:36.786 Found net devices under 0000:af:00.1: cvl_0_1 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.786 23:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.786 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.786 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.786 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.786 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:36.786 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:36.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:36.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:16:36.787 00:16:36.787 --- 10.0.0.2 ping statistics --- 00:16:36.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.787 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:16:36.787 00:16:36.787 --- 10.0.0.1 ping statistics --- 00:16:36.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.787 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=315375 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 315375 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 315375 ']' 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:36.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.787 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:36.787 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.045 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:16:37.046 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:37.046 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.046 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:37.046 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.046 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.046 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.046 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:37.046 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.046 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:16:37.046 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:49.256 Initializing NVMe Controllers 00:16:49.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:49.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:49.256 Initialization complete. Launching workers. 00:16:49.256 ======================================================== 00:16:49.256 Latency(us) 00:16:49.256 Device Information : IOPS MiB/s Average min max 00:16:49.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18693.93 73.02 3425.19 570.93 22111.54 00:16:49.256 ======================================================== 00:16:49.256 Total : 18693.93 73.02 3425.19 570.93 22111.54 00:16:49.256 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:49.256 rmmod nvme_tcp 00:16:49.256 rmmod nvme_fabrics 00:16:49.256 rmmod nvme_keyring 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 315375 ']' 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 315375 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 315375 ']' 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 315375 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 315375 00:16:49.256 23:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 315375' 00:16:49.256 killing process with pid 315375 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 315375 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 315375 00:16:49.256 nvmf threads initialize successfully 00:16:49.256 bdev subsystem init successfully 00:16:49.256 created a nvmf target service 00:16:49.256 create targets's poll groups done 00:16:49.256 all subsystems of target started 00:16:49.256 nvmf target is running 00:16:49.256 all subsystems of target stopped 00:16:49.256 destroy targets's poll groups done 00:16:49.256 destroyed the nvmf target service 00:16:49.256 bdev subsystem finish successfully 00:16:49.256 nvmf threads destroy successfully 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.256 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.517 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:49.517 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:16:49.517 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:49.517 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:49.777 00:16:49.777 real 0m21.232s 00:16:49.777 user 0m46.111s 00:16:49.777 sys 0m7.749s 00:16:49.777 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.777 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:49.777 ************************************ 00:16:49.777 END TEST nvmf_example 00:16:49.777 ************************************ 00:16:49.777 23:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:49.777 23:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:49.777 23:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.777 23:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:49.777 ************************************ 00:16:49.777 START TEST nvmf_filesystem 00:16:49.777 ************************************ 00:16:49.777 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:49.777 * Looking for test storage... 00:16:49.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.777 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:49.777 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:49.777 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.044 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:50.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.045 --rc genhtml_branch_coverage=1 00:16:50.045 --rc genhtml_function_coverage=1 00:16:50.045 --rc genhtml_legend=1 00:16:50.045 --rc geninfo_all_blocks=1 00:16:50.045 --rc geninfo_unexecuted_blocks=1 00:16:50.045 00:16:50.045 ' 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:50.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.045 --rc genhtml_branch_coverage=1 00:16:50.045 --rc genhtml_function_coverage=1 00:16:50.045 --rc genhtml_legend=1 00:16:50.045 --rc geninfo_all_blocks=1 00:16:50.045 --rc geninfo_unexecuted_blocks=1 00:16:50.045 00:16:50.045 ' 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:50.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.045 --rc genhtml_branch_coverage=1 00:16:50.045 --rc genhtml_function_coverage=1 00:16:50.045 --rc genhtml_legend=1 00:16:50.045 --rc geninfo_all_blocks=1 00:16:50.045 --rc geninfo_unexecuted_blocks=1 00:16:50.045 00:16:50.045 ' 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:50.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.045 --rc genhtml_branch_coverage=1 00:16:50.045 --rc genhtml_function_coverage=1 00:16:50.045 --rc genhtml_legend=1 00:16:50.045 --rc geninfo_all_blocks=1 00:16:50.045 --rc geninfo_unexecuted_blocks=1 00:16:50.045 00:16:50.045 ' 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:16:50.045 23:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:16:50.045 
23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:50.045 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:16:50.046 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:50.046 #define SPDK_CONFIG_H 00:16:50.046 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:50.046 #define SPDK_CONFIG_APPS 1 00:16:50.046 #define SPDK_CONFIG_ARCH native 00:16:50.046 #undef SPDK_CONFIG_ASAN 00:16:50.047 #undef SPDK_CONFIG_AVAHI 00:16:50.047 #undef SPDK_CONFIG_CET 00:16:50.047 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:50.047 #define SPDK_CONFIG_COVERAGE 1 00:16:50.047 #define SPDK_CONFIG_CROSS_PREFIX 00:16:50.047 #undef SPDK_CONFIG_CRYPTO 00:16:50.047 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:50.047 #undef SPDK_CONFIG_CUSTOMOCF 00:16:50.047 #undef SPDK_CONFIG_DAOS 00:16:50.047 #define SPDK_CONFIG_DAOS_DIR 00:16:50.047 #define SPDK_CONFIG_DEBUG 1 00:16:50.047 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:50.047 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:16:50.047 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:50.047 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:50.047 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:50.047 #undef SPDK_CONFIG_DPDK_UADK 00:16:50.047 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:16:50.047 #define SPDK_CONFIG_EXAMPLES 1 00:16:50.047 #undef SPDK_CONFIG_FC 00:16:50.047 #define SPDK_CONFIG_FC_PATH 00:16:50.047 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:50.047 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:50.047 #define SPDK_CONFIG_FSDEV 1 00:16:50.047 #undef SPDK_CONFIG_FUSE 00:16:50.047 #undef SPDK_CONFIG_FUZZER 00:16:50.047 #define SPDK_CONFIG_FUZZER_LIB 00:16:50.047 #undef SPDK_CONFIG_GOLANG 00:16:50.047 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:50.047 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:50.047 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:50.047 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:50.047 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:50.047 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:50.047 #undef SPDK_CONFIG_HAVE_LZ4 00:16:50.047 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:50.047 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:50.047 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:50.047 #define SPDK_CONFIG_IDXD 1 00:16:50.047 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:50.047 #undef SPDK_CONFIG_IPSEC_MB 00:16:50.047 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:50.047 #define SPDK_CONFIG_ISAL 1 00:16:50.047 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:50.047 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:50.047 #define SPDK_CONFIG_LIBDIR 00:16:50.047 #undef SPDK_CONFIG_LTO 00:16:50.047 #define SPDK_CONFIG_MAX_LCORES 128 00:16:50.047 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:50.047 #define SPDK_CONFIG_NVME_CUSE 1 00:16:50.047 #undef SPDK_CONFIG_OCF 00:16:50.047 #define SPDK_CONFIG_OCF_PATH 00:16:50.047 #define SPDK_CONFIG_OPENSSL_PATH 00:16:50.047 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:50.047 #define SPDK_CONFIG_PGO_DIR 00:16:50.047 #undef SPDK_CONFIG_PGO_USE 00:16:50.047 #define SPDK_CONFIG_PREFIX /usr/local 00:16:50.047 #undef SPDK_CONFIG_RAID5F 00:16:50.047 #undef SPDK_CONFIG_RBD 00:16:50.047 #define SPDK_CONFIG_RDMA 1 00:16:50.047 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:50.047 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:50.047 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:50.047 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:50.047 #define SPDK_CONFIG_SHARED 1 00:16:50.047 #undef SPDK_CONFIG_SMA 00:16:50.047 #define SPDK_CONFIG_TESTS 1 00:16:50.047 #undef SPDK_CONFIG_TSAN 
00:16:50.047 #define SPDK_CONFIG_UBLK 1 00:16:50.047 #define SPDK_CONFIG_UBSAN 1 00:16:50.047 #undef SPDK_CONFIG_UNIT_TESTS 00:16:50.047 #undef SPDK_CONFIG_URING 00:16:50.047 #define SPDK_CONFIG_URING_PATH 00:16:50.047 #undef SPDK_CONFIG_URING_ZNS 00:16:50.047 #undef SPDK_CONFIG_USDT 00:16:50.047 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:50.047 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:50.047 #define SPDK_CONFIG_VFIO_USER 1 00:16:50.047 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:50.047 #define SPDK_CONFIG_VHOST 1 00:16:50.047 #define SPDK_CONFIG_VIRTIO 1 00:16:50.047 #undef SPDK_CONFIG_VTUNE 00:16:50.047 #define SPDK_CONFIG_VTUNE_DIR 00:16:50.047 #define SPDK_CONFIG_WERROR 1 00:16:50.047 #define SPDK_CONFIG_WPDK_DIR 00:16:50.047 #undef SPDK_CONFIG_XNVME 00:16:50.047 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:16:50.047 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:50.048 23:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:50.048 23:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:50.048 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:50.049 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:50.050 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 317799 ]] 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 317799 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:16:50.051 
23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.LBrJHD 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.LBrJHD/tests/target /tmp/spdk.LBrJHD 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=60326080512 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67015405568 00:16:50.051 23:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6689325056 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=33497669632 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=33507700736 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13380046848 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=13403082752 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23035904 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=33507393536 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=33507704832 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=311296 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6701527040 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6701539328 00:16:50.051 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:50.052 * Looking for test 
storage... 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=60326080512 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8903917568 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:16:50.052 23:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:16:50.052 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:50.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.314 --rc genhtml_branch_coverage=1 00:16:50.314 --rc genhtml_function_coverage=1 00:16:50.314 --rc genhtml_legend=1 00:16:50.314 --rc geninfo_all_blocks=1 00:16:50.314 --rc geninfo_unexecuted_blocks=1 00:16:50.314 00:16:50.314 ' 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:50.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.314 --rc genhtml_branch_coverage=1 00:16:50.314 --rc genhtml_function_coverage=1 00:16:50.314 --rc genhtml_legend=1 00:16:50.314 --rc geninfo_all_blocks=1 00:16:50.314 --rc geninfo_unexecuted_blocks=1 00:16:50.314 00:16:50.314 ' 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:50.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.314 --rc genhtml_branch_coverage=1 00:16:50.314 --rc genhtml_function_coverage=1 00:16:50.314 --rc genhtml_legend=1 00:16:50.314 --rc geninfo_all_blocks=1 00:16:50.314 --rc geninfo_unexecuted_blocks=1 00:16:50.314 00:16:50.314 ' 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:50.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.314 --rc genhtml_branch_coverage=1 00:16:50.314 --rc genhtml_function_coverage=1 00:16:50.314 --rc genhtml_legend=1 00:16:50.314 --rc geninfo_all_blocks=1 00:16:50.314 --rc geninfo_unexecuted_blocks=1 00:16:50.314 00:16:50.314 ' 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:50.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:50.314 23:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:16:50.314 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:50.315 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.315 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:50.315 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:50.315 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:50.315 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.315 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.315 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.315 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:50.315 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:50.315 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:16:50.315 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:58.448 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:58.448 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:58.448 23:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:58.448 Found net devices under 0000:af:00.0: cvl_0_0 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:58.448 Found net devices under 0000:af:00.1: cvl_0_1 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:58.448 23:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:58.448 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:58.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:16:58.449 00:16:58.449 --- 10.0.0.2 ping statistics --- 00:16:58.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.449 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:58.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:16:58.449 00:16:58.449 --- 10.0.0.1 ping statistics --- 00:16:58.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.449 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.449 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:58.449 ************************************ 00:16:58.449 START TEST nvmf_filesystem_no_in_capsule 00:16:58.449 ************************************ 00:16:58.449 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:16:58.449 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:16:58.449 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:58.449 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:58.449 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:58.449 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:58.449 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=321214 00:16:58.449 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 321214 00:16:58.449 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:58.449 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 321214 ']' 00:16:58.449 23:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.449 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.449 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.449 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.449 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:58.449 [2024-12-09 23:58:42.095501] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:16:58.449 [2024-12-09 23:58:42.095547] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.449 [2024-12-09 23:58:42.193752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.449 [2024-12-09 23:58:42.234966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.449 [2024-12-09 23:58:42.235004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.449 [2024-12-09 23:58:42.235014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.449 [2024-12-09 23:58:42.235022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.449 [2024-12-09 23:58:42.235030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:58.449 [2024-12-09 23:58:42.236799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.449 [2024-12-09 23:58:42.236916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.449 [2024-12-09 23:58:42.236918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.449 [2024-12-09 23:58:42.236852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.708 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.708 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:16:58.708 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:58.708 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:58.708 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:58.708 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.708 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:58.708 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:58.708 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.708 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:58.708 [2024-12-09 23:58:42.977293] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.708 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.708 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:58.708 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.708 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:58.708 Malloc1 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.708 23:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:58.708 [2024-12-09 23:58:43.133736] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.708 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:58.709 { 00:16:58.709 "name": "Malloc1", 00:16:58.709 "aliases": [ 00:16:58.709 "1f608cd7-4995-4bd1-87e0-1fc35df08649" 00:16:58.709 ], 00:16:58.709 "product_name": "Malloc disk", 00:16:58.709 "block_size": 512, 00:16:58.709 "num_blocks": 1048576, 00:16:58.709 "uuid": "1f608cd7-4995-4bd1-87e0-1fc35df08649", 00:16:58.709 "assigned_rate_limits": { 00:16:58.709 "rw_ios_per_sec": 0, 00:16:58.709 "rw_mbytes_per_sec": 0, 00:16:58.709 "r_mbytes_per_sec": 0, 00:16:58.709 "w_mbytes_per_sec": 0 00:16:58.709 }, 00:16:58.709 "claimed": true, 00:16:58.709 "claim_type": "exclusive_write", 00:16:58.709 "zoned": false, 00:16:58.709 "supported_io_types": { 00:16:58.709 "read": 
true, 00:16:58.709 "write": true, 00:16:58.709 "unmap": true, 00:16:58.709 "flush": true, 00:16:58.709 "reset": true, 00:16:58.709 "nvme_admin": false, 00:16:58.709 "nvme_io": false, 00:16:58.709 "nvme_io_md": false, 00:16:58.709 "write_zeroes": true, 00:16:58.709 "zcopy": true, 00:16:58.709 "get_zone_info": false, 00:16:58.709 "zone_management": false, 00:16:58.709 "zone_append": false, 00:16:58.709 "compare": false, 00:16:58.709 "compare_and_write": false, 00:16:58.709 "abort": true, 00:16:58.709 "seek_hole": false, 00:16:58.709 "seek_data": false, 00:16:58.709 "copy": true, 00:16:58.709 "nvme_iov_md": false 00:16:58.709 }, 00:16:58.709 "memory_domains": [ 00:16:58.709 { 00:16:58.709 "dma_device_id": "system", 00:16:58.709 "dma_device_type": 1 00:16:58.709 }, 00:16:58.709 { 00:16:58.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.709 "dma_device_type": 2 00:16:58.709 } 00:16:58.709 ], 00:16:58.709 "driver_specific": {} 00:16:58.709 } 00:16:58.709 ]' 00:16:58.709 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:58.967 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:16:58.967 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:58.967 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:16:58.967 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:16:58.967 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:16:58.967 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:58.967 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:00.342 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:17:00.342 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:17:00.342 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:00.342 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:00.342 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:17:02.243 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:17:02.501 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:17:02.760 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:03.695 ************************************ 00:17:03.695 START TEST filesystem_ext4 00:17:03.695 ************************************ 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:17:03.695 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:17:03.953 mke2fs 1.47.0 (5-Feb-2023) 00:17:03.953 Discarding device blocks: 0/522240 done 00:17:03.953 Creating filesystem with 522240 1k blocks and 130560 inodes 00:17:03.953 Filesystem UUID: 33478ef9-2dd9-4dcc-b444-4447b8bcd3a6 00:17:03.953 Superblock backups stored on blocks: 00:17:03.953 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:17:03.953 00:17:03.953 Allocating group tables: 0/64 done 00:17:03.953 Writing inode tables: 0/64 done 00:17:06.484 Creating journal (8192 blocks): done 00:17:06.484 Writing superblocks and filesystem accounting information: 0/64 done 00:17:06.484 00:17:06.484 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:17:06.484 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:11.754 
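The trace above covers the full path this test exercises: the target is configured over JSON-RPC (TCP transport, a 512 MiB Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420), the initiator attaches with nvme-cli, and the resulting /dev/nvme0n1 is partitioned, formatted ext4, mounted and exercised. The following is a minimal standalone sketch of that same sequence, not the autotest script itself: it assumes nvmf_tgt is already running and that scripts/rpc.py from the SPDK tree is on PATH (this run instead drives the RPCs through the rpc_cmd wrapper and runs the target inside the cvl_0_0_ns_spdk network namespace set up earlier in the log; it also passes --hostnqn/--hostid generated by nvme gen-hostnqn, omitted here).

# target side: create transport, bdev, subsystem, namespace and listener
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 512 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: connect, partition, format, mount and touch a file
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
mkfs.ext4 -F /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device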
23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 321214 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:11.754 00:17:11.754 real 0m8.007s 00:17:11.754 user 0m0.030s 00:17:11.754 sys 0m0.129s 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:17:11.754 ************************************ 00:17:11.754 END TEST filesystem_ext4 00:17:11.754 ************************************ 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:11.754 ************************************ 00:17:11.754 START TEST filesystem_btrfs 00:17:11.754 ************************************ 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:17:11.754 23:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:17:11.754 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:17:12.013 btrfs-progs v6.8.1 00:17:12.013 See https://btrfs.readthedocs.io for more information. 00:17:12.013 00:17:12.013 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:17:12.013 NOTE: several default settings have changed in version 5.15, please make sure 00:17:12.013 this does not affect your deployments: 00:17:12.013 - DUP for metadata (-m dup) 00:17:12.013 - enabled no-holes (-O no-holes) 00:17:12.013 - enabled free-space-tree (-R free-space-tree) 00:17:12.013 00:17:12.013 Label: (null) 00:17:12.013 UUID: 9ccdd72b-eeae-4a9a-94e2-2dc1fae1bd5b 00:17:12.013 Node size: 16384 00:17:12.013 Sector size: 4096 (CPU page size: 4096) 00:17:12.013 Filesystem size: 510.00MiB 00:17:12.013 Block group profiles: 00:17:12.013 Data: single 8.00MiB 00:17:12.013 Metadata: DUP 32.00MiB 00:17:12.013 System: DUP 8.00MiB 00:17:12.013 SSD detected: yes 00:17:12.013 Zoned device: no 00:17:12.013 Features: extref, skinny-metadata, no-holes, free-space-tree 00:17:12.013 Checksum: crc32c 00:17:12.013 Number of devices: 1 00:17:12.013 Devices: 00:17:12.013 ID SIZE PATH 00:17:12.013 1 510.00MiB /dev/nvme0n1p1 00:17:12.013 00:17:12.013 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:17:12.013 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:12.949 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:12.949 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:17:12.949 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:12.949 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:17:12.949 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:17:12.949 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:12.949 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 321214 00:17:12.949 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:12.949 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:12.949 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:12.949 
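The make_filesystem helper traced here (common/autotest_common.sh) only varies the force flag by filesystem type: ext4 takes -F while btrfs and xfs take -f. A condensed reconstruction of that selection as it appears in the ext4 and btrfs traces; the helper also declares a retry counter i, but the retry path is not exercised in this run:

    make_filesystem() {
        local fstype=$1 dev_name=$2 i=0 force
        if [ "$fstype" = ext4 ]; then
            force=-F                      # mkfs.ext4 uses the upper-case force flag
        else
            force=-f                      # mkfs.btrfs / mkfs.xfs use -f
        fi
        mkfs.$fstype $force "$dev_name"
    }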
23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:12.949 00:17:12.949 real 0m1.193s 00:17:12.949 user 0m0.032s 00:17:12.949 sys 0m0.167s 00:17:12.949 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.949 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:17:12.949 ************************************ 00:17:12.949 END TEST filesystem_btrfs 00:17:12.949 ************************************ 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:13.208 ************************************ 00:17:13.208 START TEST filesystem_xfs 00:17:13.208 ************************************ 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:17:13.208 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:17:13.208 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:17:13.208 = sectsz=512 attr=2, projid32bit=1 00:17:13.208 = crc=1 finobt=1, sparse=1, rmapbt=0 00:17:13.208 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:17:13.208 data 
= bsize=4096 blocks=130560, imaxpct=25 00:17:13.208 = sunit=0 swidth=0 blks 00:17:13.208 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:17:13.208 log =internal log bsize=4096 blocks=16384, version=2 00:17:13.208 = sectsz=512 sunit=0 blks, lazy-count=1 00:17:13.208 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:14.143 Discarding blocks...Done. 00:17:14.143 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:17:14.143 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 321214 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:17.450 00:17:17.450 real 0m3.999s 00:17:17.450 user 0m0.024s 00:17:17.450 sys 0m0.132s 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:17:17.450 ************************************ 00:17:17.450 END TEST filesystem_xfs 00:17:17.450 ************************************ 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:17:17.450 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:17.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.709 23:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 321214 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 321214 ']' 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 321214 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.709 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 321214 00:17:17.709 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.709 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.709 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 321214' 00:17:17.709 killing process with pid 321214 00:17:17.709 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 321214 00:17:17.709 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 321214 00:17:17.970 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:17:17.970 00:17:17.970 real 0m20.328s 00:17:17.970 user 1m19.892s 00:17:17.970 sys 0m2.120s 00:17:17.970 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.970 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:17.970 ************************************ 00:17:17.970 END TEST nvmf_filesystem_no_in_capsule 00:17:17.970 ************************************ 00:17:17.970 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:17:17.970 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:17.970 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.970 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:17:18.229 ************************************ 00:17:18.229 START TEST nvmf_filesystem_in_capsule 00:17:18.229 ************************************ 00:17:18.229 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:17:18.229 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:17:18.229 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:17:18.230 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:18.230 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.230 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:18.230 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=324853 00:17:18.230 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 324853 00:17:18.230 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:18.230 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 324853 ']' 00:17:18.230 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.230 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.230 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:18.230 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.230 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:18.230 [2024-12-09 23:59:02.513943] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:17:18.230 [2024-12-09 23:59:02.513990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.230 [2024-12-09 23:59:02.607643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:18.230 [2024-12-09 23:59:02.646282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.230 [2024-12-09 23:59:02.646321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.230 [2024-12-09 23:59:02.646330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.230 [2024-12-09 23:59:02.646338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.230 [2024-12-09 23:59:02.646346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.230 [2024-12-09 23:59:02.647976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.230 [2024-12-09 23:59:02.648113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.230 [2024-12-09 23:59:02.648222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.230 [2024-12-09 23:59:02.648223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:19.167 [2024-12-09 23:59:03.404194] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.167 23:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:19.167 Malloc1 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:19.167 [2024-12-09 23:59:03.560002] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:17:19.167 23:59:03 
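Strung together, the target-side setup just traced is the usual SPDK RPC sequence; a sketch using the same calls and flags as the log (rpc_cmd is the autotest wrapper around scripts/rpc.py, so the equivalent rpc.py invocations take the same arguments):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096    # TCP transport, 4096-byte in-capsule data
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1              # 512 MiB malloc bdev with 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420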
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:19.167 { 00:17:19.167 "name": "Malloc1", 00:17:19.167 "aliases": [ 00:17:19.167 "bf43505a-1756-4fc9-a9c9-899c8c8ab987" 00:17:19.167 ], 00:17:19.167 "product_name": "Malloc disk", 00:17:19.167 "block_size": 512, 00:17:19.167 "num_blocks": 1048576, 00:17:19.167 "uuid": "bf43505a-1756-4fc9-a9c9-899c8c8ab987", 00:17:19.167 "assigned_rate_limits": { 00:17:19.167 "rw_ios_per_sec": 0, 00:17:19.167 "rw_mbytes_per_sec": 0, 00:17:19.167 "r_mbytes_per_sec": 0, 00:17:19.167 "w_mbytes_per_sec": 0 00:17:19.167 }, 00:17:19.167 "claimed": true, 00:17:19.167 "claim_type": "exclusive_write", 00:17:19.167 "zoned": false, 00:17:19.167 "supported_io_types": { 00:17:19.167 "read": true, 00:17:19.167 "write": true, 00:17:19.167 "unmap": true, 00:17:19.167 "flush": true, 00:17:19.167 "reset": true, 00:17:19.167 "nvme_admin": false, 00:17:19.167 "nvme_io": false, 00:17:19.167 "nvme_io_md": false, 00:17:19.167 "write_zeroes": true, 00:17:19.167 "zcopy": true, 00:17:19.167 "get_zone_info": false, 00:17:19.167 "zone_management": false, 00:17:19.167 "zone_append": false, 00:17:19.167 "compare": false, 00:17:19.167 "compare_and_write": false, 00:17:19.167 "abort": true, 00:17:19.167 "seek_hole": false, 00:17:19.167 "seek_data": false, 00:17:19.167 "copy": true, 00:17:19.167 "nvme_iov_md": false 00:17:19.167 }, 00:17:19.167 "memory_domains": [ 00:17:19.167 { 00:17:19.167 "dma_device_id": "system", 00:17:19.167 "dma_device_type": 1 00:17:19.167 }, 00:17:19.167 { 00:17:19.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.167 "dma_device_type": 2 00:17:19.167 } 00:17:19.167 ], 00:17:19.167 "driver_specific": {} 00:17:19.167 } 00:17:19.167 ]' 00:17:19.167 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:19.426 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:17:19.426 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:19.426 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:17:19.426 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:17:19.426 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:17:19.426 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:17:19.426 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:20.800 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:17:20.800 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:17:20.800 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:20.800 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:20.800 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:17:22.700 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:17:22.700 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:17:22.700 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:17:22.958 23:59:07 
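On the initiator side the trace then connects over TCP and prepares a single GPT partition for the filesystem tests; condensed, with $HOSTID standing in for the literal host UUID used in this run:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$HOSTID --hostid=$HOSTID
    # poll lsblk until the namespace with serial SPDKISFASTANDAWESOME appears
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe    # re-read the partition table before the first mkfs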
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:17:23.217 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:24.153 ************************************ 00:17:24.153 START TEST filesystem_in_capsule_ext4 00:17:24.153 ************************************ 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:17:24.153 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:17:24.153 mke2fs 1.47.0 (5-Feb-2023) 00:17:24.411 Discarding device blocks: 0/522240 done 00:17:24.411 Creating filesystem with 522240 1k blocks and 130560 inodes 00:17:24.411 Filesystem UUID: f977167b-1516-4576-ab4d-55d84b5588ce 00:17:24.411 Superblock backups stored on blocks: 00:17:24.411 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:17:24.411 00:17:24.411 Allocating group tables: 0/64 done 00:17:24.411 Writing inode tables: 
0/64 done 00:17:24.411 Creating journal (8192 blocks): done 00:17:24.411 Writing superblocks and filesystem accounting information: 0/64 done 00:17:24.411 00:17:24.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:17:24.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 324853 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:30.971 00:17:30.971 real 0m5.758s 00:17:30.971 user 0m0.038s 00:17:30.971 sys 0m0.073s 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:17:30.971 ************************************ 00:17:30.971 END TEST filesystem_in_capsule_ext4 00:17:30.971 ************************************ 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:30.971 
************************************ 00:17:30.971 START TEST filesystem_in_capsule_btrfs 00:17:30.971 ************************************ 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:17:30.971 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:17:30.972 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:30.972 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:17:30.972 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:17:30.972 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:17:30.972 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:17:30.972 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:17:30.972 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:17:30.972 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:17:30.972 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:17:30.972 btrfs-progs v6.8.1 00:17:30.972 See https://btrfs.readthedocs.io for more information. 00:17:30.972 00:17:30.972 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:17:30.972 NOTE: several default settings have changed in version 5.15, please make sure 00:17:30.972 this does not affect your deployments: 00:17:30.972 - DUP for metadata (-m dup) 00:17:30.972 - enabled no-holes (-O no-holes) 00:17:30.972 - enabled free-space-tree (-R free-space-tree) 00:17:30.972 00:17:30.972 Label: (null) 00:17:30.972 UUID: edb09f3a-eb9e-4ebb-b964-4fa1b7407a6d 00:17:30.972 Node size: 16384 00:17:30.972 Sector size: 4096 (CPU page size: 4096) 00:17:30.972 Filesystem size: 510.00MiB 00:17:30.972 Block group profiles: 00:17:30.972 Data: single 8.00MiB 00:17:30.972 Metadata: DUP 32.00MiB 00:17:30.972 System: DUP 8.00MiB 00:17:30.972 SSD detected: yes 00:17:30.972 Zoned device: no 00:17:30.972 Features: extref, skinny-metadata, no-holes, free-space-tree 00:17:30.972 Checksum: crc32c 00:17:30.972 Number of devices: 1 00:17:30.972 Devices: 00:17:30.972 ID SIZE PATH 00:17:30.972 1 510.00MiB /dev/nvme0n1p1 00:17:30.972 00:17:30.972 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:17:30.972 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 324853 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:30.972 00:17:30.972 real 0m0.866s 00:17:30.972 user 0m0.029s 00:17:30.972 sys 0m0.131s 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:17:30.972 ************************************ 00:17:30.972 END TEST filesystem_in_capsule_btrfs 00:17:30.972 ************************************ 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:30.972 ************************************ 00:17:30.972 START TEST filesystem_in_capsule_xfs 00:17:30.972 ************************************ 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:17:30.972 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:17:30.972 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:17:30.972 = sectsz=512 attr=2, projid32bit=1 00:17:30.972 = crc=1 finobt=1, sparse=1, rmapbt=0 00:17:30.972 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:17:30.972 data = bsize=4096 blocks=130560, imaxpct=25 00:17:30.972 = sunit=0 swidth=0 blks 00:17:30.972 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:17:30.972 log =internal log bsize=4096 blocks=16384, version=2 00:17:30.972 = sectsz=512 sunit=0 blks, lazy-count=1 00:17:30.972 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:31.906 Discarding blocks...Done. 
00:17:31.906 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:17:31.906 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:33.807 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:33.807 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:17:33.807 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:33.807 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:17:33.807 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:17:33.807 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:33.807 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 324853 00:17:33.807 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:33.807 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:33.807 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:33.807 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:33.807 00:17:33.807 real 0m2.676s 00:17:33.807 user 0m0.033s 00:17:33.807 sys 0m0.081s 00:17:33.807 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.807 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:17:33.807 ************************************ 00:17:33.807 END TEST filesystem_in_capsule_xfs 00:17:33.807 ************************************ 00:17:33.807 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:34.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 324853 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 324853 ']' 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 324853 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:17:34.066 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.325 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 324853 00:17:34.325 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.325 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.325 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 324853' 00:17:34.325 killing process with pid 324853 00:17:34.325 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 324853 00:17:34.325 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 324853 00:17:34.586 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:17:34.586 00:17:34.586 real 0m16.462s 00:17:34.586 user 1m4.633s 00:17:34.586 sys 0m1.840s 00:17:34.586 23:59:18 
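Stripped of the waitforserial_disconnect polling, the in-capsule teardown traced above reduces to the following sketch ($nvmfpid stands in for 324853, the target PID in this run):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the SPDK_TEST partition under a lock
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"                # stop the nvmf_tgt process and reap it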
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.586 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:34.586 ************************************ 00:17:34.586 END TEST nvmf_filesystem_in_capsule 00:17:34.586 ************************************ 00:17:34.586 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:17:34.586 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:34.586 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:17:34.586 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:34.586 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:17:34.586 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:34.586 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:34.586 rmmod nvme_tcp 00:17:34.586 rmmod nvme_fabrics 00:17:34.586 rmmod nvme_keyring 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.586 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.587 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.132 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:37.132 00:17:37.132 real 0m47.032s 00:17:37.132 user 2m26.824s 00:17:37.132 sys 0m9.976s 00:17:37.132 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.132 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:17:37.132 
************************************ 00:17:37.132 END TEST nvmf_filesystem 00:17:37.132 ************************************ 00:17:37.132 23:59:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:17:37.132 23:59:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:37.132 23:59:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.132 23:59:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.132 ************************************ 00:17:37.132 START TEST nvmf_target_discovery 00:17:37.132 ************************************ 00:17:37.132 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:17:37.132 * Looking for test storage... 00:17:37.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.132 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:37.132 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:17:37.132 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:37.132 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:37.132 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.132 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:37.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.133 --rc genhtml_branch_coverage=1 00:17:37.133 --rc genhtml_function_coverage=1 00:17:37.133 --rc genhtml_legend=1 00:17:37.133 --rc geninfo_all_blocks=1 00:17:37.133 --rc geninfo_unexecuted_blocks=1 00:17:37.133 00:17:37.133 ' 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:37.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.133 --rc genhtml_branch_coverage=1 00:17:37.133 --rc genhtml_function_coverage=1 00:17:37.133 --rc genhtml_legend=1 00:17:37.133 --rc geninfo_all_blocks=1 00:17:37.133 --rc geninfo_unexecuted_blocks=1 00:17:37.133 00:17:37.133 ' 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:37.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.133 --rc genhtml_branch_coverage=1 00:17:37.133 --rc genhtml_function_coverage=1 00:17:37.133 --rc genhtml_legend=1 00:17:37.133 --rc geninfo_all_blocks=1 00:17:37.133 --rc geninfo_unexecuted_blocks=1 00:17:37.133 00:17:37.133 ' 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:37.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.133 --rc genhtml_branch_coverage=1 00:17:37.133 --rc genhtml_function_coverage=1 00:17:37.133 --rc genhtml_legend=1 00:17:37.133 --rc geninfo_all_blocks=1 00:17:37.133 --rc geninfo_unexecuted_blocks=1 00:17:37.133 00:17:37.133 ' 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:37.133 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:37.134 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.134 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.134 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.134 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:37.134 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:37.134 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:17:37.134 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:17:45.271 23:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:45.271 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:45.271 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:45.271 Found net devices under 0000:af:00.0: cvl_0_0 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:45.271 Found net devices under 0000:af:00.1: cvl_0_1 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:45.271 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:45.272 23:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:45.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:17:45.272 00:17:45.272 --- 10.0.0.2 ping statistics --- 00:17:45.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.272 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:45.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:45.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:17:45.272 00:17:45.272 --- 10.0.0.1 ping statistics --- 00:17:45.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.272 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=331559 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 331559 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 331559 ']' 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.272 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.272 [2024-12-09 23:59:28.868456] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:17:45.272 [2024-12-09 23:59:28.868506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.272 [2024-12-09 23:59:28.963498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:45.272 [2024-12-09 23:59:29.005402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.272 [2024-12-09 23:59:29.005437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.272 [2024-12-09 23:59:29.005447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.272 [2024-12-09 23:59:29.005455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.272 [2024-12-09 23:59:29.005462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
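The EAL and tracepoint notices above come from nvmf_tgt started inside the target namespace with four cores (-m 0xF) and all tracepoint groups (-e 0xFFFF), after which waitforlisten blocks until the RPC socket answers. A rough stand-in for that start-and-wait step, assuming the default RPC socket /var/tmp/spdk.sock and a checkout rooted at $SPDK_DIR (both assumptions, not taken from the log):

  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the target is ready (approximation of waitforlisten)
  until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done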
00:17:45.272 [2024-12-09 23:59:29.007218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.272 [2024-12-09 23:59:29.007326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.272 [2024-12-09 23:59:29.007433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.272 [2024-12-09 23:59:29.007434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:45.272 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.272 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:17:45.272 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:45.272 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:45.272 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.531 [2024-12-09 23:59:29.763488] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.531 Null1 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.531 23:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.531 [2024-12-09 23:59:29.839968] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.531 Null2 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:17:45.531 Null3 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:17:45.531 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.532 Null4 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.532 23:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.532 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:17:45.791 00:17:45.791 Discovery Log Number of Records 6, Generation counter 6 00:17:45.791 =====Discovery Log Entry 0====== 00:17:45.791 trtype: tcp 00:17:45.791 adrfam: ipv4 00:17:45.791 subtype: current discovery subsystem 00:17:45.791 treq: not required 00:17:45.791 portid: 0 00:17:45.791 trsvcid: 4420 00:17:45.791 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:45.791 traddr: 10.0.0.2 00:17:45.791 eflags: explicit discovery connections, duplicate discovery information 00:17:45.791 sectype: none 00:17:45.791 =====Discovery Log Entry 1====== 00:17:45.791 trtype: tcp 00:17:45.791 adrfam: ipv4 00:17:45.791 subtype: nvme subsystem 00:17:45.791 treq: not required 00:17:45.791 portid: 0 00:17:45.791 trsvcid: 4420 00:17:45.791 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:45.791 traddr: 10.0.0.2 00:17:45.791 eflags: none 00:17:45.791 sectype: none 00:17:45.791 =====Discovery Log Entry 2====== 00:17:45.791 trtype: tcp 00:17:45.791 adrfam: ipv4 00:17:45.791 subtype: nvme subsystem 00:17:45.791 treq: not required 00:17:45.791 portid: 0 00:17:45.791 trsvcid: 4420 00:17:45.791 subnqn: nqn.2016-06.io.spdk:cnode2 00:17:45.791 traddr: 10.0.0.2 00:17:45.791 eflags: none 00:17:45.791 sectype: none 00:17:45.791 =====Discovery Log Entry 3====== 00:17:45.791 trtype: tcp 00:17:45.791 adrfam: ipv4 00:17:45.791 subtype: nvme subsystem 00:17:45.791 treq: not required 00:17:45.791 portid: 0 00:17:45.791 trsvcid: 4420 00:17:45.791 subnqn: nqn.2016-06.io.spdk:cnode3 00:17:45.791 traddr: 10.0.0.2 00:17:45.791 eflags: none 00:17:45.791 sectype: none 00:17:45.791 =====Discovery Log Entry 4====== 00:17:45.791 trtype: tcp 00:17:45.791 adrfam: ipv4 00:17:45.791 subtype: nvme subsystem 
00:17:45.791 treq: not required 00:17:45.791 portid: 0 00:17:45.791 trsvcid: 4420 00:17:45.791 subnqn: nqn.2016-06.io.spdk:cnode4 00:17:45.791 traddr: 10.0.0.2 00:17:45.791 eflags: none 00:17:45.791 sectype: none 00:17:45.791 =====Discovery Log Entry 5====== 00:17:45.791 trtype: tcp 00:17:45.791 adrfam: ipv4 00:17:45.791 subtype: discovery subsystem referral 00:17:45.791 treq: not required 00:17:45.791 portid: 0 00:17:45.791 trsvcid: 4430 00:17:45.791 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:45.791 traddr: 10.0.0.2 00:17:45.791 eflags: none 00:17:45.791 sectype: none 00:17:45.791 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:17:45.791 Perform nvmf subsystem discovery via RPC 00:17:45.791 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:17:45.791 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.791 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.791 [ 00:17:45.791 { 00:17:45.791 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:45.791 "subtype": "Discovery", 00:17:45.791 "listen_addresses": [ 00:17:45.791 { 00:17:45.791 "trtype": "TCP", 00:17:45.791 "adrfam": "IPv4", 00:17:45.791 "traddr": "10.0.0.2", 00:17:45.791 "trsvcid": "4420" 00:17:45.791 } 00:17:45.791 ], 00:17:45.791 "allow_any_host": true, 00:17:45.791 "hosts": [] 00:17:45.791 }, 00:17:45.791 { 00:17:45.791 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.791 "subtype": "NVMe", 00:17:45.791 "listen_addresses": [ 00:17:45.791 { 00:17:45.791 "trtype": "TCP", 00:17:45.791 "adrfam": "IPv4", 00:17:45.791 "traddr": "10.0.0.2", 00:17:45.791 "trsvcid": "4420" 00:17:45.791 } 00:17:45.791 ], 00:17:45.791 "allow_any_host": true, 00:17:45.791 "hosts": [], 00:17:45.791 "serial_number": "SPDK00000000000001", 00:17:45.791 "model_number": "SPDK bdev Controller", 00:17:45.791 "max_namespaces": 32, 00:17:45.791 "min_cntlid": 1, 00:17:45.791 "max_cntlid": 65519, 00:17:45.791 "namespaces": [ 00:17:45.791 { 00:17:45.792 "nsid": 1, 00:17:45.792 "bdev_name": "Null1", 00:17:45.792 "name": "Null1", 00:17:45.792 "nguid": "6A408CEA04324B7A94089D650767D3FC", 00:17:45.792 "uuid": "6a408cea-0432-4b7a-9408-9d650767d3fc" 00:17:45.792 } 00:17:45.792 ] 00:17:45.792 }, 00:17:45.792 { 00:17:45.792 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:45.792 "subtype": "NVMe", 00:17:45.792 "listen_addresses": [ 00:17:45.792 { 00:17:45.792 "trtype": "TCP", 00:17:45.792 "adrfam": "IPv4", 00:17:45.792 "traddr": "10.0.0.2", 00:17:45.792 "trsvcid": "4420" 00:17:45.792 } 00:17:45.792 ], 00:17:45.792 "allow_any_host": true, 00:17:45.792 "hosts": [], 00:17:45.792 "serial_number": "SPDK00000000000002", 00:17:45.792 "model_number": "SPDK bdev Controller", 00:17:45.792 "max_namespaces": 32, 00:17:45.792 "min_cntlid": 1, 00:17:45.792 "max_cntlid": 65519, 00:17:45.792 "namespaces": [ 00:17:45.792 { 00:17:45.792 "nsid": 1, 00:17:45.792 "bdev_name": "Null2", 00:17:45.792 "name": "Null2", 00:17:45.792 "nguid": "75D7386F680A4E3AA62B8911B8A2C0E3", 00:17:45.792 "uuid": "75d7386f-680a-4e3a-a62b-8911b8a2c0e3" 00:17:45.792 } 00:17:45.792 ] 00:17:45.792 }, 00:17:45.792 { 00:17:45.792 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:17:45.792 "subtype": "NVMe", 00:17:45.792 "listen_addresses": [ 00:17:45.792 { 00:17:45.792 "trtype": "TCP", 00:17:45.792 "adrfam": "IPv4", 00:17:45.792 "traddr": "10.0.0.2", 
00:17:45.792 "trsvcid": "4420" 00:17:45.792 } 00:17:45.792 ], 00:17:45.792 "allow_any_host": true, 00:17:45.792 "hosts": [], 00:17:45.792 "serial_number": "SPDK00000000000003", 00:17:45.792 "model_number": "SPDK bdev Controller", 00:17:45.792 "max_namespaces": 32, 00:17:45.792 "min_cntlid": 1, 00:17:45.792 "max_cntlid": 65519, 00:17:45.792 "namespaces": [ 00:17:45.792 { 00:17:45.792 "nsid": 1, 00:17:45.792 "bdev_name": "Null3", 00:17:45.792 "name": "Null3", 00:17:45.792 "nguid": "A06B0B27A8574E92907EAB5AA0DDEB51", 00:17:45.792 "uuid": "a06b0b27-a857-4e92-907e-ab5aa0ddeb51" 00:17:45.792 } 00:17:45.792 ] 00:17:45.792 }, 00:17:45.792 { 00:17:45.792 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:17:45.792 "subtype": "NVMe", 00:17:45.792 "listen_addresses": [ 00:17:45.792 { 00:17:45.792 "trtype": "TCP", 00:17:45.792 "adrfam": "IPv4", 00:17:45.792 "traddr": "10.0.0.2", 00:17:45.792 "trsvcid": "4420" 00:17:45.792 } 00:17:45.792 ], 00:17:45.792 "allow_any_host": true, 00:17:45.792 "hosts": [], 00:17:45.792 "serial_number": "SPDK00000000000004", 00:17:45.792 "model_number": "SPDK bdev Controller", 00:17:45.792 "max_namespaces": 32, 00:17:45.792 "min_cntlid": 1, 00:17:45.792 "max_cntlid": 65519, 00:17:45.792 "namespaces": [ 00:17:45.792 { 00:17:45.792 "nsid": 1, 00:17:45.792 "bdev_name": "Null4", 00:17:45.792 "name": "Null4", 00:17:45.792 "nguid": "1F3499815C5D4CF69E863222FB684402", 00:17:45.792 "uuid": "1f349981-5c5d-4cf6-9e86-3222fb684402" 00:17:45.792 } 00:17:45.792 ] 00:17:45.792 } 00:17:45.792 ] 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.792 23:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:17:45.792 23:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:45.792 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:45.792 rmmod nvme_tcp 00:17:46.051 rmmod nvme_fabrics 00:17:46.051 rmmod nvme_keyring 00:17:46.051 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:46.052 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:17:46.052 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:17:46.052 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 331559 ']' 00:17:46.052 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 331559 00:17:46.052 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 331559 ']' 00:17:46.052 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 331559 00:17:46.052 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:17:46.052 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.052 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 331559 00:17:46.052 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.052 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.052 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 331559' 00:17:46.052 killing process with pid 331559 00:17:46.052 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 331559 00:17:46.052 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 331559 00:17:46.312 23:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:46.312 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:46.312 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:46.312 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:17:46.312 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:17:46.312 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:46.312 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:17:46.312 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:46.312 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:46.312 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.312 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.312 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.223 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:48.223 00:17:48.223 real 0m11.432s 00:17:48.223 user 0m8.582s 00:17:48.223 sys 0m6.086s 00:17:48.223 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:48.223 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.223 ************************************ 00:17:48.223 END TEST nvmf_target_discovery 00:17:48.223 ************************************ 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:48.483 ************************************ 00:17:48.483 START TEST nvmf_referrals 00:17:48.483 ************************************ 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:17:48.483 * Looking for test storage... 
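[editor's note] The teardown that just completed for the discovery test follows a simple pattern: each test subsystem is deleted before its backing null bdev. A minimal standalone sketch of that same loop, assuming SPDK's scripts/rpc.py is the client behind the test's rpc_cmd helper and the default RPC socket is in use:

# Sketch only: tear down the four test subsystems and their null bdevs
for i in $(seq 1 4); do
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i   # remove the subsystem first
    ./scripts/rpc.py bdev_null_delete Null$i                             # then its backing null bdev
done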
00:17:48.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.483 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:48.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.483 --rc genhtml_branch_coverage=1 00:17:48.483 --rc genhtml_function_coverage=1 00:17:48.483 --rc genhtml_legend=1 00:17:48.484 --rc geninfo_all_blocks=1 00:17:48.484 --rc geninfo_unexecuted_blocks=1 00:17:48.484 00:17:48.484 ' 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:48.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.484 --rc genhtml_branch_coverage=1 00:17:48.484 --rc genhtml_function_coverage=1 00:17:48.484 --rc genhtml_legend=1 00:17:48.484 --rc geninfo_all_blocks=1 00:17:48.484 --rc geninfo_unexecuted_blocks=1 00:17:48.484 00:17:48.484 ' 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:48.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.484 --rc genhtml_branch_coverage=1 00:17:48.484 --rc genhtml_function_coverage=1 00:17:48.484 --rc genhtml_legend=1 00:17:48.484 --rc geninfo_all_blocks=1 00:17:48.484 --rc geninfo_unexecuted_blocks=1 00:17:48.484 00:17:48.484 ' 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:48.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.484 --rc genhtml_branch_coverage=1 00:17:48.484 --rc genhtml_function_coverage=1 00:17:48.484 --rc genhtml_legend=1 00:17:48.484 --rc geninfo_all_blocks=1 00:17:48.484 --rc geninfo_unexecuted_blocks=1 00:17:48.484 00:17:48.484 ' 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:48.484 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:17:48.745 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.745 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.745 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.745 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.745 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.745 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.745 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:17:48.745 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.745 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:17:48.745 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:48.745 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:48.745 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:48.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:17:48.746 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:17:56.882 23:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:56.882 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:56.882 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:56.882 
23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:56.882 Found net devices under 0000:af:00.0: cvl_0_0 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:56.882 Found net devices under 0000:af:00.1: cvl_0_1 00:17:56.882 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:56.883 23:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:56.883 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:56.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:56.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:17:56.883 00:17:56.883 --- 10.0.0.2 ping statistics --- 00:17:56.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.883 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:56.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:56.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:17:56.883 00:17:56.883 --- 10.0.0.1 ping statistics --- 00:17:56.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.883 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=335589 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 335589 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 335589 ']' 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
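[editor's note] The target application above is launched inside the target network namespace, and the test then blocks until its RPC socket comes up. A minimal equivalent is sketched below, with the binary path, core mask, and namespace name taken from the log; the polling loop is a simplified stand-in for the test's waitforlisten helper, not its actual implementation:

# Sketch only: start nvmf_tgt in the target namespace and wait for its RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5   # keep polling until the app is listening on /var/tmp/spdk.sock
done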
00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.883 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:56.883 [2024-12-09 23:59:40.372551] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:17:56.883 [2024-12-09 23:59:40.372610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.883 [2024-12-09 23:59:40.476093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:56.883 [2024-12-09 23:59:40.520821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.883 [2024-12-09 23:59:40.520860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.883 [2024-12-09 23:59:40.520869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.883 [2024-12-09 23:59:40.520878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.883 [2024-12-09 23:59:40.520886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:56.883 [2024-12-09 23:59:40.522584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.883 [2024-12-09 23:59:40.522693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.883 [2024-12-09 23:59:40.522922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:56.883 [2024-12-09 23:59:40.522924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:56.883 [2024-12-09 23:59:41.263396] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
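[editor's note] The referral checks that follow are driven entirely over RPC; a condensed sketch of the same sequence with scripts/rpc.py, with the transport options, addresses, and ports exactly as they appear in the log:

# Sketch only: TCP transport, discovery listener, and three discovery referrals
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                      # options as passed by the test
./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430
done
./scripts/rpc.py nvmf_discovery_get_referrals | jq length                     # the test expects 3 here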
00:17:56.883 [2024-12-09 23:59:41.286947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:56.883 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:57.142 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.401 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:17:57.401 23:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:17:57.401 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:57.401 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:57.401 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:57.401 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:57.401 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:57.401 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:17:57.401 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:17:57.401 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:17:57.401 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.401 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:57.401 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.401 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:57.660 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:57.660 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:17:57.660 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:57.660 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:17:57.660 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:17:57.660 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:57.660 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:57.660 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:57.919 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:17:57.919 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:17:57.919 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:17:57.919 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:17:57.919 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:57.919 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.178 23:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:58.178 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:58.437 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:17:58.437 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:58.437 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:17:58.437 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:17:58.437 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:58.437 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:58.437 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:58.437 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:17:58.437 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:17:58.437 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:17:58.437 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:17:58.437 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:58.437 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:58.696 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:58.955 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:17:58.955 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:17:58.955 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:17:58.955 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:17:58.955 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:58.955 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:17:58.955 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
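For reference, the referral verification and removal traced above can be reproduced outside the autotest harness. A minimal sketch, assuming a target already serving discovery on 10.0.0.2:8009 and SPDK's scripts/rpc.py talking to its default RPC socket (paths, addresses and NQNs are taken from this run, not prescriptive):
# Target-side (RPC) view of the configured referrals.
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
# Host-side view: read the discovery log page and drop the "current discovery subsystem" record.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
# Remove the subsystem referral, then the discovery-subsystem referral, as done above.
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
# After both removals, nvmf_discovery_get_referrals should report a list of length 0.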
00:17:58.955 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:17:58.955 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:58.955 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:58.955 rmmod nvme_tcp 00:17:58.955 rmmod nvme_fabrics 00:17:58.955 rmmod nvme_keyring 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 335589 ']' 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 335589 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 335589 ']' 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 335589 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335589 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335589' 00:17:59.214 killing process with pid 335589 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 335589 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 335589 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:59.214 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.755 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:01.755 00:18:01.755 real 0m13.017s 00:18:01.755 user 0m15.570s 00:18:01.755 sys 0m6.557s 00:18:01.755 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.755 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:18:01.755 ************************************ 00:18:01.755 END TEST nvmf_referrals 00:18:01.755 ************************************ 00:18:01.755 23:59:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:18:01.755 23:59:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:01.755 23:59:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.755 23:59:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:01.755 ************************************ 00:18:01.755 START TEST nvmf_connect_disconnect 00:18:01.755 ************************************ 00:18:01.755 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:18:01.755 * Looking for test storage... 00:18:01.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:01.755 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:01.755 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:18:01.755 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.755 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:01.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.756 --rc genhtml_branch_coverage=1 00:18:01.756 --rc genhtml_function_coverage=1 00:18:01.756 --rc genhtml_legend=1 00:18:01.756 --rc geninfo_all_blocks=1 00:18:01.756 --rc geninfo_unexecuted_blocks=1 00:18:01.756 00:18:01.756 ' 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:01.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.756 --rc genhtml_branch_coverage=1 00:18:01.756 --rc genhtml_function_coverage=1 00:18:01.756 --rc genhtml_legend=1 00:18:01.756 --rc geninfo_all_blocks=1 00:18:01.756 --rc geninfo_unexecuted_blocks=1 00:18:01.756 00:18:01.756 ' 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:01.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.756 --rc genhtml_branch_coverage=1 00:18:01.756 --rc genhtml_function_coverage=1 00:18:01.756 --rc genhtml_legend=1 00:18:01.756 --rc geninfo_all_blocks=1 00:18:01.756 --rc geninfo_unexecuted_blocks=1 00:18:01.756 00:18:01.756 ' 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:01.756 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.756 --rc genhtml_branch_coverage=1 00:18:01.756 --rc genhtml_function_coverage=1 00:18:01.756 --rc genhtml_legend=1 00:18:01.756 --rc geninfo_all_blocks=1 00:18:01.756 --rc geninfo_unexecuted_blocks=1 00:18:01.756 00:18:01.756 ' 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.756 23:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:01.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:18:01.756 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:09.898 
23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:09.898 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:09.898 
23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:09.898 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:09.898 Found net devices under 0000:af:00.0: cvl_0_0 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
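The NIC probe above is just a sysfs walk: each matching PCI function is mapped to its kernel net device via /sys/bus/pci/devices/<pci>/net. A rough stand-alone equivalent, assuming the E810 functions seen in this run (0000:af:00.0 and 0000:af:00.1, device id 0x159b, ice driver):
# Print the net device behind each E810 PCI function on this bus.
for pci in /sys/bus/pci/devices/0000:af:00.*; do
  echo "$pci -> $(ls "$pci/net" 2>/dev/null)"   # e.g. cvl_0_0, cvl_0_1
done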
00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:09.898 Found net devices under 0000:af:00.1: cvl_0_1 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:09.898 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:09.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:18:09.899 00:18:09.899 --- 10.0.0.2 ping statistics --- 00:18:09.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.899 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:09.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:09.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:18:09.899 00:18:09.899 --- 10.0.0.1 ping statistics --- 00:18:09.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.899 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=340096 00:18:09.899 23:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 340096 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 340096 ']' 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.899 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:09.899 [2024-12-09 23:59:53.464977] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:18:09.899 [2024-12-09 23:59:53.465022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.899 [2024-12-09 23:59:53.559626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:09.899 [2024-12-09 23:59:53.600329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.899 [2024-12-09 23:59:53.600367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.899 [2024-12-09 23:59:53.600376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.899 [2024-12-09 23:59:53.600385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.899 [2024-12-09 23:59:53.600392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:09.899 [2024-12-09 23:59:53.602132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.899 [2024-12-09 23:59:53.602241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.899 [2024-12-09 23:59:53.602353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.899 [2024-12-09 23:59:53.602354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:09.899 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.899 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:18:09.899 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:09.899 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:09.899 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:09.899 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.899 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:18:09.899 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.899 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:09.899 [2024-12-09 23:59:54.346489] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.899 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.899 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:18:09.899 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.899 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:10.158 23:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:10.158 [2024-12-09 23:59:54.411419] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:18:10.158 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:18:13.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:27.472 rmmod nvme_tcp 00:18:27.472 rmmod nvme_fabrics 00:18:27.472 rmmod nvme_keyring 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 340096 ']' 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 340096 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 340096 ']' 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 340096 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
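The five "disconnected 1 controller(s)" lines above come from a plain connect/disconnect loop against the malloc-backed subsystem configured through the RPCs traced earlier. A minimal host-side sketch, assuming those RPCs have already been issued and a reasonably recent nvme-cli (NQN, address and iteration count as in this run):
# Target side, already done above via rpc_cmd:
#   nvmf_create_transport -t tcp -o -u 8192 -c 0
#   bdev_malloc_create 64 512
#   nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
#   nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
#   nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Host side: connect and disconnect repeatedly, as connect_disconnect.sh does with num_iterations=5.
for i in $(seq 1 5); do
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # emits "NQN:... disconnected 1 controller(s)"
done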
00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 340096 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 340096' 00:18:27.472 killing process with pid 340096 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 340096 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 340096 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:27.472 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.023 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:30.023 00:18:30.023 real 0m28.155s 00:18:30.023 user 1m14.420s 00:18:30.023 sys 0m7.607s 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:30.023 ************************************ 00:18:30.023 END TEST nvmf_connect_disconnect 00:18:30.023 ************************************ 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:18:30.023 ************************************ 00:18:30.023 START TEST nvmf_multitarget 00:18:30.023 ************************************ 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:30.023 * Looking for test storage... 00:18:30.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:30.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.023 --rc genhtml_branch_coverage=1 00:18:30.023 --rc genhtml_function_coverage=1 00:18:30.023 --rc genhtml_legend=1 00:18:30.023 --rc geninfo_all_blocks=1 00:18:30.023 --rc geninfo_unexecuted_blocks=1 00:18:30.023 00:18:30.023 ' 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:30.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.023 --rc genhtml_branch_coverage=1 00:18:30.023 --rc genhtml_function_coverage=1 00:18:30.023 --rc genhtml_legend=1 00:18:30.023 --rc geninfo_all_blocks=1 00:18:30.023 --rc geninfo_unexecuted_blocks=1 00:18:30.023 00:18:30.023 ' 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:30.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.023 --rc genhtml_branch_coverage=1 00:18:30.023 --rc genhtml_function_coverage=1 00:18:30.023 --rc genhtml_legend=1 00:18:30.023 --rc geninfo_all_blocks=1 00:18:30.023 --rc geninfo_unexecuted_blocks=1 00:18:30.023 00:18:30.023 ' 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:30.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.023 --rc genhtml_branch_coverage=1 00:18:30.023 --rc genhtml_function_coverage=1 00:18:30.023 --rc genhtml_legend=1 00:18:30.023 --rc geninfo_all_blocks=1 00:18:30.023 --rc geninfo_unexecuted_blocks=1 00:18:30.023 00:18:30.023 ' 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.023 00:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.023 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:30.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:30.024 00:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:18:30.024 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:38.150 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:38.150 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:18:38.150 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:38.150 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:38.150 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:38.150 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:38.150 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:38.150 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:18:38.150 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:38.151 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:38.151 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:38.151 Found net devices under 0000:af:00.0: cvl_0_0 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:38.151 Found net devices under 0000:af:00.1: cvl_0_1 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:38.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:38.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:18:38.151 00:18:38.151 --- 10.0.0.2 ping statistics --- 00:18:38.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.151 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:38.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:38.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:18:38.151 00:18:38.151 --- 10.0.0.1 ping statistics --- 00:18:38.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.151 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:38.151 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:38.152 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.152 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:38.152 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=347400 00:18:38.152 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:38.152 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 347400 00:18:38.152 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 347400 ']' 00:18:38.152 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.152 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.152 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.152 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.152 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:38.152 [2024-12-10 00:00:21.659912] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:18:38.152 [2024-12-10 00:00:21.659969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.152 [2024-12-10 00:00:21.757523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:38.152 [2024-12-10 00:00:21.798589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.152 [2024-12-10 00:00:21.798626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.152 [2024-12-10 00:00:21.798636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.152 [2024-12-10 00:00:21.798645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.152 [2024-12-10 00:00:21.798653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.152 [2024-12-10 00:00:21.800401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.152 [2024-12-10 00:00:21.800509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.152 [2024-12-10 00:00:21.800627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.152 [2024-12-10 00:00:21.800628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:38.152 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.152 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:18:38.152 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:38.152 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:38.152 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:38.152 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.152 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:38.152 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:38.152 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:18:38.409 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:38.409 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:18:38.409 "nvmf_tgt_1" 00:18:38.409 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:38.409 "nvmf_tgt_2" 00:18:38.409 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:18:38.409 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:18:38.665 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:38.665 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:18:38.665 true 00:18:38.665 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:38.923 true 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:38.923 rmmod nvme_tcp 00:18:38.923 rmmod nvme_fabrics 00:18:38.923 rmmod nvme_keyring 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 347400 ']' 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 347400 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 347400 ']' 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 347400 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.923 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 347400 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.183 00:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 347400' 00:18:39.183 killing process with pid 347400 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 347400 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 347400 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.183 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.728 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:41.728 00:18:41.728 real 0m11.591s 00:18:41.728 user 0m10.129s 00:18:41.728 sys 0m6.055s 00:18:41.728 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.728 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:41.728 ************************************ 00:18:41.728 END TEST nvmf_multitarget 00:18:41.728 ************************************ 00:18:41.728 00:00:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:41.728 00:00:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:41.728 00:00:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.728 00:00:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:41.728 ************************************ 00:18:41.728 START TEST nvmf_rpc 00:18:41.728 ************************************ 00:18:41.728 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:41.728 * Looking for test storage... 
00:18:41.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:41.728 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:41.728 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:18:41.728 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:41.728 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:41.728 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:41.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.729 --rc genhtml_branch_coverage=1 00:18:41.729 --rc genhtml_function_coverage=1 00:18:41.729 --rc genhtml_legend=1 00:18:41.729 --rc geninfo_all_blocks=1 00:18:41.729 --rc geninfo_unexecuted_blocks=1 00:18:41.729 00:18:41.729 ' 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:41.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.729 --rc genhtml_branch_coverage=1 00:18:41.729 --rc genhtml_function_coverage=1 00:18:41.729 --rc genhtml_legend=1 00:18:41.729 --rc geninfo_all_blocks=1 00:18:41.729 --rc geninfo_unexecuted_blocks=1 00:18:41.729 00:18:41.729 ' 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:41.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.729 --rc genhtml_branch_coverage=1 00:18:41.729 --rc genhtml_function_coverage=1 00:18:41.729 --rc genhtml_legend=1 00:18:41.729 --rc geninfo_all_blocks=1 00:18:41.729 --rc geninfo_unexecuted_blocks=1 00:18:41.729 00:18:41.729 ' 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:41.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.729 --rc genhtml_branch_coverage=1 00:18:41.729 --rc genhtml_function_coverage=1 00:18:41.729 --rc genhtml_legend=1 00:18:41.729 --rc geninfo_all_blocks=1 00:18:41.729 --rc geninfo_unexecuted_blocks=1 00:18:41.729 00:18:41.729 ' 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:41.729 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:18:41.730 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:18:41.730 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:41.730 00:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.730 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:41.730 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:41.730 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:41.730 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.730 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.730 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.730 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:41.730 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:41.730 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:18:41.730 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:49.857 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:49.857 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:49.857 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:49.857 Found net devices under 0000:af:00.0: cvl_0_0 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:49.857 Found net devices under 0000:af:00.1: cvl_0_1 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:49.857 00:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:49.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:18:49.857 00:18:49.857 --- 10.0.0.2 ping statistics --- 00:18:49.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.857 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:49.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:49.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:18:49.857 00:18:49.857 --- 10.0.0.1 ping statistics --- 00:18:49.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.857 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:49.857 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:49.858 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:49.858 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:49.858 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.858 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:49.858 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=351585 00:18:49.858 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:49.858 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 351585 00:18:49.858 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 351585 ']' 00:18:49.858 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.858 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.858 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.858 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.858 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:49.858 [2024-12-10 00:00:33.414040] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
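Condensed, the interface preparation traced above comes down to the shell sequence below; the interface names (cvl_0_0 / cvl_0_1), the namespace name and the 10.0.0.x addresses are simply the values this run used, so treat this as a recap of the trace rather than a canonical recipe:

    # flush any stale addressing, then isolate the target-side port in its own namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side stays in the root namespace, target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open TCP/4420 towards the initiator interface and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1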
00:18:49.858 [2024-12-10 00:00:33.414093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.858 [2024-12-10 00:00:33.512803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.858 [2024-12-10 00:00:33.554653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.858 [2024-12-10 00:00:33.554689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.858 [2024-12-10 00:00:33.554699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.858 [2024-12-10 00:00:33.554710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.858 [2024-12-10 00:00:33.554718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.858 [2024-12-10 00:00:33.556309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.858 [2024-12-10 00:00:33.556420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.858 [2024-12-10 00:00:33.556527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.858 [2024-12-10 00:00:33.556528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.858 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.858 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:49.858 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.858 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.858 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:49.858 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.858 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:49.858 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.858 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:49.858 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.858 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:49.858 "tick_rate": 2500000000, 00:18:49.858 "poll_groups": [ 00:18:49.858 { 00:18:49.858 "name": "nvmf_tgt_poll_group_000", 00:18:49.858 "admin_qpairs": 0, 00:18:49.858 "io_qpairs": 0, 00:18:49.858 "current_admin_qpairs": 0, 00:18:49.858 "current_io_qpairs": 0, 00:18:49.858 "pending_bdev_io": 0, 00:18:49.858 "completed_nvme_io": 0, 00:18:49.858 "transports": [] 00:18:49.858 }, 00:18:49.858 { 00:18:49.858 "name": "nvmf_tgt_poll_group_001", 00:18:49.858 "admin_qpairs": 0, 00:18:49.858 "io_qpairs": 0, 00:18:49.858 "current_admin_qpairs": 0, 00:18:49.858 "current_io_qpairs": 0, 00:18:49.858 "pending_bdev_io": 0, 00:18:49.858 "completed_nvme_io": 0, 00:18:49.858 "transports": [] 00:18:49.858 }, 00:18:49.858 { 00:18:49.858 "name": "nvmf_tgt_poll_group_002", 00:18:49.858 "admin_qpairs": 0, 00:18:49.858 "io_qpairs": 0, 00:18:49.858 
"current_admin_qpairs": 0, 00:18:49.858 "current_io_qpairs": 0, 00:18:49.858 "pending_bdev_io": 0, 00:18:49.858 "completed_nvme_io": 0, 00:18:49.858 "transports": [] 00:18:49.858 }, 00:18:49.858 { 00:18:49.858 "name": "nvmf_tgt_poll_group_003", 00:18:49.858 "admin_qpairs": 0, 00:18:49.858 "io_qpairs": 0, 00:18:49.858 "current_admin_qpairs": 0, 00:18:49.858 "current_io_qpairs": 0, 00:18:49.858 "pending_bdev_io": 0, 00:18:49.858 "completed_nvme_io": 0, 00:18:49.858 "transports": [] 00:18:49.858 } 00:18:49.858 ] 00:18:49.858 }' 00:18:49.858 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:49.858 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.117 [2024-12-10 00:00:34.420527] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:50.117 "tick_rate": 2500000000, 00:18:50.117 "poll_groups": [ 00:18:50.117 { 00:18:50.117 "name": "nvmf_tgt_poll_group_000", 00:18:50.117 "admin_qpairs": 0, 00:18:50.117 "io_qpairs": 0, 00:18:50.117 "current_admin_qpairs": 0, 00:18:50.117 "current_io_qpairs": 0, 00:18:50.117 "pending_bdev_io": 0, 00:18:50.117 "completed_nvme_io": 0, 00:18:50.117 "transports": [ 00:18:50.117 { 00:18:50.117 "trtype": "TCP" 00:18:50.117 } 00:18:50.117 ] 00:18:50.117 }, 00:18:50.117 { 00:18:50.117 "name": "nvmf_tgt_poll_group_001", 00:18:50.117 "admin_qpairs": 0, 00:18:50.117 "io_qpairs": 0, 00:18:50.117 "current_admin_qpairs": 0, 00:18:50.117 "current_io_qpairs": 0, 00:18:50.117 "pending_bdev_io": 0, 00:18:50.117 "completed_nvme_io": 0, 00:18:50.117 "transports": [ 00:18:50.117 { 00:18:50.117 "trtype": "TCP" 00:18:50.117 } 00:18:50.117 ] 00:18:50.117 }, 00:18:50.117 { 00:18:50.117 "name": "nvmf_tgt_poll_group_002", 00:18:50.117 "admin_qpairs": 0, 00:18:50.117 "io_qpairs": 0, 00:18:50.117 "current_admin_qpairs": 0, 00:18:50.117 "current_io_qpairs": 0, 00:18:50.117 "pending_bdev_io": 0, 00:18:50.117 "completed_nvme_io": 0, 00:18:50.117 "transports": [ 00:18:50.117 { 00:18:50.117 "trtype": "TCP" 
00:18:50.117 } 00:18:50.117 ] 00:18:50.117 }, 00:18:50.117 { 00:18:50.117 "name": "nvmf_tgt_poll_group_003", 00:18:50.117 "admin_qpairs": 0, 00:18:50.117 "io_qpairs": 0, 00:18:50.117 "current_admin_qpairs": 0, 00:18:50.117 "current_io_qpairs": 0, 00:18:50.117 "pending_bdev_io": 0, 00:18:50.117 "completed_nvme_io": 0, 00:18:50.117 "transports": [ 00:18:50.117 { 00:18:50.117 "trtype": "TCP" 00:18:50.117 } 00:18:50.117 ] 00:18:50.117 } 00:18:50.117 ] 00:18:50.117 }' 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.117 Malloc1 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:50.117 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.375 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.375 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.375 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:50.375 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.376 [2024-12-10 00:00:34.600923] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:18:50.376 [2024-12-10 00:00:34.635720] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:18:50.376 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:50.376 could not add new controller: failed to write to nvme-fabrics device 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:50.376 00:00:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.376 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:51.793 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:51.793 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:51.793 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.793 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:51.793 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:53.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:53.696 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:18:53.697 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:53.956 [2024-12-10 00:00:38.181499] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:18:53.956 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:53.956 could not add new controller: failed to write to nvme-fabrics device 00:18:53.956 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:53.956 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.956 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.956 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.956 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:53.956 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.956 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:53.956 
00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.956 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:55.335 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:55.335 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:55.335 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:55.335 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:55.335 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:57.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.232 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:57.490 
00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:57.490 [2024-12-10 00:00:41.728302] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.490 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:58.862 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:58.862 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:58.862 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:58.862 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:58.862 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:00.766 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:00.766 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:00.766 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:00.766 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:00.766 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.766 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:00.766 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:00.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:00.766 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:00.766 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:00.766 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:00.766 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:00.766 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:00.766 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:00.766 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:00.767 [2024-12-10 00:00:45.228659] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.767 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:01.023 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.023 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:01.023 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.023 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:01.023 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.023 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:02.400 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:02.400 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:02.400 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.400 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:02.400 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:04.304 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:04.304 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:04.304 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.304 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:04.304 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.304 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:04.304 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:04.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.563 [2024-12-10 00:00:48.861713] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.563 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:05.945 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:05.945 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:05.945 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.945 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:05.945 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:07.845 
00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:07.845 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:07.845 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:07.845 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:07.845 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.845 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:07.845 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:08.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:08.108 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.109 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:08.109 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:08.109 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:08.109 [2024-12-10 00:00:52.387969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.109 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.109 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:08.109 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.109 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:08.109 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.109 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:08.109 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.109 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:08.109 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.109 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:09.492 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:09.492 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:09.492 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:09.492 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:09.492 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:11.389 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:11.389 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:11.390 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:11.390 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:11.390 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:11.390 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:11.390 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:11.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.390 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:11.390 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:11.647 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:11.647 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
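From here the trace keeps repeating the same build-up/tear-down cycle driven by target/rpc.sh: create the subsystem, expose it over TCP, attach the Malloc1 namespace, connect with nvme-cli, then undo everything. One iteration, condensed from the commands visible in the trace (rpc_cmd is the harness wrapper that forwards to scripts/rpc.py against the nvmf_tgt started earlier; NQN, serial, host UUID and addresses are the values used in this particular run):

    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # initiator side: connect, then poll until a block device with the expected serial appears
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
                 --hostid=006f0d1b-21c0-e711-906e-00163566263e \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # polled by waitforserial
    # tear-down: disconnect, drop the namespace, delete the subsystem
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1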
00:19:11.647 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:11.647 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:11.647 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:11.647 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:11.647 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.647 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.647 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.648 [2024-12-10 00:00:55.927361] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.648 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:13.021 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:13.021 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:13.021 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:13.021 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:13.021 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:14.919 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:14.919 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:14.919 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:14.920 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:14.920 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:14.920 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:14.920 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:14.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:14.920 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:15.176 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:15.176 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:19:15.177 
00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 [2024-12-10 00:00:59.474827] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 [2024-12-10 00:00:59.522950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 
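Each pass of the loop above exercises the same subsystem lifecycle through rpc_cmd: create the subsystem, add a TCP listener on 10.0.0.2:4420, attach the Malloc1 bdev as a namespace, allow any host, then remove the namespace and delete the subsystem. A minimal sketch of the equivalent direct rpc.py calls, assuming the default RPC socket:

# Illustrative only: the lifecycle the loop drives, issued straight through rpc.py.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME            # create the subsystem
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # listen on TCP port 4420
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1                            # expose the Malloc1 bdev
$RPC nvmf_subsystem_allow_any_host "$NQN"                            # no host allow-list
$RPC nvmf_subsystem_remove_ns "$NQN" 1                               # detach namespace 1
$RPC nvmf_delete_subsystem "$NQN"                                    # tear the subsystem down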
00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 [2024-12-10 00:00:59.571070] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 [2024-12-10 00:00:59.619244] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.177 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.435 [2024-12-10 00:00:59.667422] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:19:15.435 "tick_rate": 2500000000, 00:19:15.435 "poll_groups": [ 00:19:15.435 { 00:19:15.435 "name": "nvmf_tgt_poll_group_000", 00:19:15.435 "admin_qpairs": 2, 00:19:15.435 "io_qpairs": 196, 00:19:15.435 "current_admin_qpairs": 0, 00:19:15.435 "current_io_qpairs": 0, 00:19:15.435 "pending_bdev_io": 0, 00:19:15.435 "completed_nvme_io": 246, 00:19:15.435 "transports": [ 00:19:15.435 { 00:19:15.435 "trtype": "TCP" 00:19:15.435 } 00:19:15.435 ] 00:19:15.435 }, 00:19:15.435 { 00:19:15.435 "name": "nvmf_tgt_poll_group_001", 00:19:15.435 "admin_qpairs": 2, 00:19:15.435 "io_qpairs": 196, 00:19:15.435 "current_admin_qpairs": 0, 00:19:15.435 "current_io_qpairs": 0, 00:19:15.435 "pending_bdev_io": 0, 00:19:15.435 "completed_nvme_io": 297, 00:19:15.435 "transports": [ 00:19:15.435 { 00:19:15.435 "trtype": "TCP" 00:19:15.435 } 00:19:15.435 ] 00:19:15.435 }, 00:19:15.435 { 00:19:15.435 "name": "nvmf_tgt_poll_group_002", 00:19:15.435 "admin_qpairs": 1, 00:19:15.435 "io_qpairs": 196, 00:19:15.435 "current_admin_qpairs": 0, 00:19:15.435 "current_io_qpairs": 0, 00:19:15.435 "pending_bdev_io": 0, 00:19:15.435 "completed_nvme_io": 345, 00:19:15.435 "transports": [ 00:19:15.435 { 00:19:15.435 "trtype": "TCP" 00:19:15.435 } 00:19:15.435 ] 00:19:15.435 }, 00:19:15.435 { 00:19:15.435 "name": "nvmf_tgt_poll_group_003", 00:19:15.435 "admin_qpairs": 2, 00:19:15.435 "io_qpairs": 196, 00:19:15.435 "current_admin_qpairs": 0, 00:19:15.435 "current_io_qpairs": 0, 00:19:15.435 "pending_bdev_io": 0, 00:19:15.435 "completed_nvme_io": 246, 00:19:15.435 "transports": [ 00:19:15.435 { 00:19:15.435 "trtype": "TCP" 00:19:15.435 } 00:19:15.435 ] 00:19:15.435 } 00:19:15.435 ] 00:19:15.435 }' 00:19:15.435 00:00:59 
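The jsum helper invoked next reduces the nvmf_get_stats JSON captured above to a single number per field: jq emits one value per poll group and awk sums them (admin_qpairs totals 7, io_qpairs totals 784 for this run). A minimal sketch of that pipeline, using the same rpc.py path the test scripts use:

# Illustrative sketch of jsum: sum one numeric field across all poll groups.
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
stats=$($RPC nvmf_get_stats)             # JSON shaped like the block captured above
jsum '.poll_groups[].admin_qpairs'       # -> 7 for this run
jsum '.poll_groups[].io_qpairs'          # -> 784 for this run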
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:19:15.435 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:15.436 rmmod nvme_tcp 00:19:15.436 rmmod nvme_fabrics 00:19:15.436 rmmod nvme_keyring 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 351585 ']' 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 351585 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 351585 ']' 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 351585 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.436 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351585 00:19:15.695 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.695 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.695 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351585' 
00:19:15.695 killing process with pid 351585 00:19:15.695 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 351585 00:19:15.695 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 351585 00:19:15.695 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:15.695 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:15.695 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:15.695 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:19:15.695 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:19:15.695 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:15.695 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:19:15.695 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:15.695 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:15.695 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.695 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.695 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:18.236 00:19:18.236 real 0m36.474s 00:19:18.236 user 1m47.543s 00:19:18.236 sys 0m8.557s 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:18.236 ************************************ 00:19:18.236 END TEST nvmf_rpc 00:19:18.236 ************************************ 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:18.236 ************************************ 00:19:18.236 START TEST nvmf_invalid 00:19:18.236 ************************************ 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:18.236 * Looking for test storage... 
00:19:18.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:18.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.236 --rc genhtml_branch_coverage=1 00:19:18.236 --rc genhtml_function_coverage=1 00:19:18.236 --rc genhtml_legend=1 00:19:18.236 --rc geninfo_all_blocks=1 00:19:18.236 --rc geninfo_unexecuted_blocks=1 00:19:18.236 00:19:18.236 ' 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:18.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.236 --rc genhtml_branch_coverage=1 00:19:18.236 --rc genhtml_function_coverage=1 00:19:18.236 --rc genhtml_legend=1 00:19:18.236 --rc geninfo_all_blocks=1 00:19:18.236 --rc geninfo_unexecuted_blocks=1 00:19:18.236 00:19:18.236 ' 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:18.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.236 --rc genhtml_branch_coverage=1 00:19:18.236 --rc genhtml_function_coverage=1 00:19:18.236 --rc genhtml_legend=1 00:19:18.236 --rc geninfo_all_blocks=1 00:19:18.236 --rc geninfo_unexecuted_blocks=1 00:19:18.236 00:19:18.236 ' 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:18.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.236 --rc genhtml_branch_coverage=1 00:19:18.236 --rc genhtml_function_coverage=1 00:19:18.236 --rc genhtml_legend=1 00:19:18.236 --rc geninfo_all_blocks=1 00:19:18.236 --rc geninfo_unexecuted_blocks=1 00:19:18.236 00:19:18.236 ' 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:19:18.236 00:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.236 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:18.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:19:18.237 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.377 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:26.378 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:26.378 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:26.378 Found net devices under 0000:af:00.0: cvl_0_0 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:26.378 Found net devices under 0000:af:00.1: cvl_0_1 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:26.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:19:26.378 00:19:26.378 --- 10.0.0.2 ping statistics --- 00:19:26.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.378 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:26.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:19:26.378 00:19:26.378 --- 10.0.0.1 ping statistics --- 00:19:26.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.378 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=359933 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 359933 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 359933 ']' 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.378 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:26.378 [2024-12-10 00:01:09.895915] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
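nvmftestinit, traced above, wires the two e810 ports into a loopback topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the target (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), and the two pings confirm reachability in both directions. A condensed recap of those commands; the interface names are specific to this host:

# Recap of the namespace wiring performed above (interface names are host-specific).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator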
00:19:26.378 [2024-12-10 00:01:09.895967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.378 [2024-12-10 00:01:09.992269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:26.378 [2024-12-10 00:01:10.039785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.378 [2024-12-10 00:01:10.039827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.378 [2024-12-10 00:01:10.039838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.379 [2024-12-10 00:01:10.039846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.379 [2024-12-10 00:01:10.039853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.379 [2024-12-10 00:01:10.041397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.379 [2024-12-10 00:01:10.041510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.379 [2024-12-10 00:01:10.041596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:26.379 [2024-12-10 00:01:10.041594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.379 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.379 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:19:26.379 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:26.379 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.379 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:26.379 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.379 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:26.379 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24894 00:19:26.636 [2024-12-10 00:01:10.979207] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:19:26.636 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:19:26.636 { 00:19:26.636 "nqn": "nqn.2016-06.io.spdk:cnode24894", 00:19:26.636 "tgt_name": "foobar", 00:19:26.636 "method": "nvmf_create_subsystem", 00:19:26.636 "req_id": 1 00:19:26.636 } 00:19:26.636 Got JSON-RPC error response 00:19:26.636 response: 00:19:26.636 { 00:19:26.636 "code": -32603, 00:19:26.636 "message": "Unable to find target foobar" 00:19:26.636 }' 00:19:26.636 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:19:26.636 { 00:19:26.636 "nqn": "nqn.2016-06.io.spdk:cnode24894", 00:19:26.636 "tgt_name": "foobar", 00:19:26.636 "method": "nvmf_create_subsystem", 00:19:26.636 "req_id": 1 00:19:26.636 } 00:19:26.636 Got JSON-RPC error response 00:19:26.636 
response: 00:19:26.636 { 00:19:26.636 "code": -32603, 00:19:26.636 "message": "Unable to find target foobar" 00:19:26.636 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:19:26.636 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:19:26.636 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13992 00:19:26.895 [2024-12-10 00:01:11.191968] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13992: invalid serial number 'SPDKISFASTANDAWESOME' 00:19:26.895 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:19:26.895 { 00:19:26.895 "nqn": "nqn.2016-06.io.spdk:cnode13992", 00:19:26.895 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:26.895 "method": "nvmf_create_subsystem", 00:19:26.895 "req_id": 1 00:19:26.895 } 00:19:26.895 Got JSON-RPC error response 00:19:26.895 response: 00:19:26.895 { 00:19:26.895 "code": -32602, 00:19:26.895 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:26.895 }' 00:19:26.895 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:19:26.895 { 00:19:26.895 "nqn": "nqn.2016-06.io.spdk:cnode13992", 00:19:26.895 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:26.895 "method": "nvmf_create_subsystem", 00:19:26.895 "req_id": 1 00:19:26.895 } 00:19:26.895 Got JSON-RPC error response 00:19:26.895 response: 00:19:26.895 { 00:19:26.895 "code": -32602, 00:19:26.895 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:26.895 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:26.895 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:19:26.895 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16186 00:19:27.154 [2024-12-10 00:01:11.396588] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16186: invalid model number 'SPDK_Controller' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:19:27.154 { 00:19:27.154 "nqn": "nqn.2016-06.io.spdk:cnode16186", 00:19:27.154 "model_number": "SPDK_Controller\u001f", 00:19:27.154 "method": "nvmf_create_subsystem", 00:19:27.154 "req_id": 1 00:19:27.154 } 00:19:27.154 Got JSON-RPC error response 00:19:27.154 response: 00:19:27.154 { 00:19:27.154 "code": -32602, 00:19:27.154 "message": "Invalid MN SPDK_Controller\u001f" 00:19:27.154 }' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:19:27.154 { 00:19:27.154 "nqn": "nqn.2016-06.io.spdk:cnode16186", 00:19:27.154 "model_number": "SPDK_Controller\u001f", 00:19:27.154 "method": "nvmf_create_subsystem", 00:19:27.154 "req_id": 1 00:19:27.154 } 00:19:27.154 Got JSON-RPC error response 00:19:27.154 response: 00:19:27.154 { 00:19:27.154 "code": -32602, 00:19:27.154 "message": "Invalid MN SPDK_Controller\u001f" 00:19:27.154 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:19:27.154 00:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:19:27.154 
00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.154 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 
00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ O == \- ]] 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Og[cUHvN`t}2DB]=0*.i'\''' 00:19:27.155 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Og[cUHvN`t}2DB]=0*.i'\''' nqn.2016-06.io.spdk:cnode22133 00:19:27.413 [2024-12-10 00:01:11.769783] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22133: invalid serial number 'Og[cUHvN`t}2DB]=0*.i'' 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:19:27.413 { 00:19:27.413 "nqn": "nqn.2016-06.io.spdk:cnode22133", 00:19:27.413 "serial_number": "Og[cUHvN`t}2DB]=0*.i'\''", 00:19:27.413 "method": "nvmf_create_subsystem", 00:19:27.413 "req_id": 1 00:19:27.413 } 00:19:27.413 Got JSON-RPC error response 00:19:27.413 response: 00:19:27.413 { 00:19:27.413 "code": -32602, 00:19:27.413 "message": "Invalid SN Og[cUHvN`t}2DB]=0*.i'\''" 00:19:27.413 }' 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:19:27.413 { 00:19:27.413 "nqn": "nqn.2016-06.io.spdk:cnode22133", 00:19:27.413 "serial_number": "Og[cUHvN`t}2DB]=0*.i'", 00:19:27.413 "method": "nvmf_create_subsystem", 00:19:27.413 "req_id": 1 00:19:27.413 } 00:19:27.413 Got JSON-RPC error response 00:19:27.413 response: 00:19:27.413 { 00:19:27.413 "code": -32602, 00:19:27.413 "message": "Invalid SN Og[cUHvN`t}2DB]=0*.i'" 00:19:27.413 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' 
'72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.413 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 
00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.414 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 
00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
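The long trace running above and below is gen_random_s assembling a 41-character string one character at a time: pick an ASCII code from the 32-127 chars array, convert it with printf %x, materialize it with echo -e '\xNN', and append it to string. The condensed sketch below reproduces that flow together with the negative check it feeds (the earlier 21-character case against cnode22133 shows the same rpc.py call); the arithmetic character picker and the relative rpc.py path are simplifications of mine, not the suite's exact code.

# Condensed sketch of gen_random_s as traced here: draw printable ASCII codes
# (32..127, matching the chars array) and append one literal character per pass.
gen_random_s() {
    local length=$1 ll string=''
    for (( ll = 0; ll < length; ll++ )); do
        local code=$(( RANDOM % 96 + 32 ))            # 32..127
        string+=$(echo -e "\\x$(printf %x "$code")")  # decimal code -> character
    done
    echo "$string"
}

# The negative case then feeds the random serial to nvmf_create_subsystem and
# expects the target to reject it; the flags and NQN are the ones from the log,
# while the rpc.py path is shortened to a relative one for readability.
serial=$(gen_random_s 41)
out=$(./scripts/rpc.py nvmf_create_subsystem -s "$serial" nqn.2016-06.io.spdk:cnode22133 2>&1) || true
[[ $out == *"Invalid SN"* ]] && echo 'got the expected "Invalid SN" error'

The invalid target name, serial number, and model number cases earlier in the trace all follow this same shape: build a bad value, issue the RPC, and pattern-match the JSON-RPC error text.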
00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:19:27.673 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:19:27.673 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=' ' 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x47' 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]] 00:19:27.674 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '^u0U6Cp /dev/null' 00:19:30.009 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:32.550 00:19:32.550 real 0m14.098s 00:19:32.550 user 0m21.894s 00:19:32.550 sys 0m6.616s 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:32.550 ************************************ 00:19:32.550 END TEST nvmf_invalid 00:19:32.550 ************************************ 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:32.550 ************************************ 00:19:32.550 START TEST nvmf_connect_stress 00:19:32.550 ************************************ 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:32.550 * Looking for test storage... 
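A second pattern visible in this stretch of the log is cleanup-by-trap: the test registered trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini ...' SIGINT SIGTERM EXIT up front, which is why the _remove_spdk_ns and ip -4 addr flush cvl_0_1 lines above run even if a case fails. A hedged sketch of that shape follows; nvmftestfini here is a simplified stand-in for the suite's function, and since the log does not show _remove_spdk_ns's body, the namespace deletion is an assumption.

# Sketch of the EXIT-trap teardown the trace relies on (simplified, not the
# suite's exact nvmftestfini).
nvmftestfini() {
    [[ -n ${nvmfpid:-} ]] && kill "$nvmfpid" 2>/dev/null || :   # stop nvmf_tgt if still up
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || :               # assumed _remove_spdk_ns equivalent
    ip -4 addr flush cvl_0_1 2>/dev/null || :                   # matches the flush line above
}

# Registered once near the start of each test so teardown also runs on failure.
trap 'nvmftestfini' SIGINT SIGTERM EXIT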
00:19:32.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.550 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:32.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.551 --rc genhtml_branch_coverage=1 00:19:32.551 --rc genhtml_function_coverage=1 00:19:32.551 --rc genhtml_legend=1 00:19:32.551 --rc geninfo_all_blocks=1 00:19:32.551 --rc geninfo_unexecuted_blocks=1 00:19:32.551 00:19:32.551 ' 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:32.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.551 --rc genhtml_branch_coverage=1 00:19:32.551 --rc genhtml_function_coverage=1 00:19:32.551 --rc genhtml_legend=1 00:19:32.551 --rc geninfo_all_blocks=1 00:19:32.551 --rc geninfo_unexecuted_blocks=1 00:19:32.551 00:19:32.551 ' 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:32.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.551 --rc genhtml_branch_coverage=1 00:19:32.551 --rc genhtml_function_coverage=1 00:19:32.551 --rc genhtml_legend=1 00:19:32.551 --rc geninfo_all_blocks=1 00:19:32.551 --rc geninfo_unexecuted_blocks=1 00:19:32.551 00:19:32.551 ' 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:32.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.551 --rc genhtml_branch_coverage=1 00:19:32.551 --rc genhtml_function_coverage=1 00:19:32.551 --rc genhtml_legend=1 00:19:32.551 --rc geninfo_all_blocks=1 00:19:32.551 --rc geninfo_unexecuted_blocks=1 00:19:32.551 00:19:32.551 ' 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.551 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:32.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:19:32.552 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:19:40.726 00:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:40.726 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:40.726 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:40.726 Found net devices under 0000:af:00.0: cvl_0_0 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:40.726 Found net devices under 0000:af:00.1: cvl_0_1 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.726 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.727 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:40.727 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:40.727 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.727 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.727 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.727 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.727 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:40.727 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.727 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.727 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.727 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:40.727 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:40.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:19:40.727 00:19:40.727 --- 10.0.0.2 ping statistics --- 00:19:40.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.727 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:19:40.727 00:19:40.727 --- 10.0.0.1 ping statistics --- 00:19:40.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.727 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=364581 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 364581 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 364581 ']' 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:40.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.727 [2024-12-10 00:01:24.120630] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:19:40.727 [2024-12-10 00:01:24.120685] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.727 [2024-12-10 00:01:24.216362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:40.727 [2024-12-10 00:01:24.254529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.727 [2024-12-10 00:01:24.254569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.727 [2024-12-10 00:01:24.254578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.727 [2024-12-10 00:01:24.254586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.727 [2024-12-10 00:01:24.254609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.727 [2024-12-10 00:01:24.256114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.727 [2024-12-10 00:01:24.256231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.727 [2024-12-10 00:01:24.256232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.727 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.727 [2024-12-10 00:01:25.002550] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
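The nvmftestinit trace above (nvmf/common.sh@250-291) amounts to a small piece of network plumbing before the target starts: one E810 port (cvl_0_0) is moved into a fresh namespace that will host nvmf_tgt, the other port (cvl_0_1) stays in the default namespace as the initiator side, a tagged iptables rule opens TCP/4420, and a ping in each direction proves the path before the application is launched inside the namespace. The following is a condensed, hand-written sketch of that setup, assuming the two ports are already bound to the kernel ice driver; it is not the harness code itself.

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The real run tags the rule with an SPDK_NVMF comment (the rule text itself)
    # so teardown can strip it later; the tag below is illustrative only.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF: allow 4420'
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator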
00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.727 [2024-12-10 00:01:25.023055] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.727 NULL1 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=364636 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.727 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:40.728 00:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.728 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:41.294 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.294 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:41.294 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:41.294 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.294 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:41.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:41.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:41.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:41.810 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.810 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:41.810 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:41.810 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.810 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:42.068 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.068 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:42.068 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.068 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.068 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:42.327 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.327 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:42.327 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.327 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.327 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:42.893 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.893 00:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:42.893 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.893 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.893 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.152 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.152 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:43.152 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.152 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.152 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.409 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.409 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:43.409 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.409 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.410 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.668 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.668 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:43.668 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.668 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.668 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.924 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.924 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:43.924 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.925 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.925 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:44.490 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.490 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:44.490 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:44.490 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.490 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:44.746 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.746 00:01:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:44.746 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:44.746 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.746 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:45.003 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.003 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:45.003 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:45.003 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.003 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:45.261 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.261 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:45.261 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:45.261 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.261 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:45.827 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.827 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:45.827 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:45.827 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.827 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:46.085 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.085 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:46.085 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:46.085 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.085 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:46.342 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.342 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:46.342 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:46.342 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.342 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:46.599 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.599 00:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:46.599 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:46.599 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.599 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:46.857 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.857 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:46.857 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:46.857 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.857 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:47.423 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.423 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:47.423 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:47.423 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.423 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:47.680 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.680 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:47.680 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:47.680 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.680 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:47.937 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.937 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:47.937 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:47.937 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.938 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:48.199 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.199 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:48.199 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:48.199 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.199 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:48.776 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.776 00:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:48.776 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:48.776 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.776 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:49.034 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.034 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:49.034 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.034 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.034 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:49.293 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.293 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:49.293 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.293 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.293 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:49.551 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.551 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:49.551 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.551 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.551 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:49.809 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.809 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:49.809 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.809 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.809 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:50.375 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.375 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:50.375 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:50.375 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.375 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:50.632 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.632 00:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:50.632 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:50.632 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.632 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:50.889 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 364636 00:19:50.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (364636) - No such process 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 364636 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:50.889 rmmod nvme_tcp 00:19:50.889 rmmod nvme_fabrics 00:19:50.889 rmmod nvme_keyring 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 364581 ']' 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 364581 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 364581 ']' 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 364581 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.889 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 364581 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
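The long run of "kill -0 364636" probes above is connect_stress.sh's liveness loop rather than repeated kill attempts: with signal 0, kill only checks whether the stress client (PERF_PID=364636, launched with -t 10 for a ten-second run) still exists, and each iteration then issues another RPC batch (connect_stress.sh line 35, rpc_cmd), presumably the commands accumulated into rpc.txt by the twenty cat appends earlier in the trace. Once the client exits, the probe fails with "No such process" and the script drops through to wait and cleanup. A minimal sketch of that pattern follows; the trace only shows lines 34-35 repeating, so the surrounding loop syntax is an assumption.

    # Keep exercising the target with RPCs for as long as the stress client is alive.
    while kill -0 "$PERF_PID"; do      # signal 0 = existence check, nothing is delivered
        rpc_cmd < "$rpcs"              # rpcs=.../test/nvmf/target/rpc.txt in this run
    done
    wait "$PERF_PID"                   # reap the client after its timed run ends
    rm -f "$rpcs"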
00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 364581' 00:19:51.147 killing process with pid 364581 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 364581 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 364581 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.147 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:53.685 00:19:53.685 real 0m21.148s 00:19:53.685 user 0m42.808s 00:19:53.685 sys 0m9.008s 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:53.685 ************************************ 00:19:53.685 END TEST nvmf_connect_stress 00:19:53.685 ************************************ 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:53.685 ************************************ 00:19:53.685 START TEST nvmf_fused_ordering 00:19:53.685 ************************************ 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:53.685 * Looking for test storage... 
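Teardown in nvmftestfini is deliberately narrow: the nvme-tcp/nvme-fabrics modules are unloaded, the nvmf_tgt process (pid 364581) is killed, and the firewall is restored by re-loading an iptables dump with the SPDK-tagged entries filtered out, so only the rule added earlier for port 4420 disappears while unrelated rules survive. A short sketch of that cleanup idiom is below; the namespace removal line is an assumption, since the trace calls _remove_spdk_ns but does not show its body.

    # Drop only the rules whose comment tag contains SPDK_NVMF; everything else survives.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Assumed equivalent of _remove_spdk_ns for this run: delete the target-side namespace.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1           # clear the initiator-side address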
00:19:53.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:53.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.685 --rc genhtml_branch_coverage=1 00:19:53.685 --rc genhtml_function_coverage=1 00:19:53.685 --rc genhtml_legend=1 00:19:53.685 --rc geninfo_all_blocks=1 00:19:53.685 --rc geninfo_unexecuted_blocks=1 00:19:53.685 00:19:53.685 ' 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:53.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.685 --rc genhtml_branch_coverage=1 00:19:53.685 --rc genhtml_function_coverage=1 00:19:53.685 --rc genhtml_legend=1 00:19:53.685 --rc geninfo_all_blocks=1 00:19:53.685 --rc geninfo_unexecuted_blocks=1 00:19:53.685 00:19:53.685 ' 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:53.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.685 --rc genhtml_branch_coverage=1 00:19:53.685 --rc genhtml_function_coverage=1 00:19:53.685 --rc genhtml_legend=1 00:19:53.685 --rc geninfo_all_blocks=1 00:19:53.685 --rc geninfo_unexecuted_blocks=1 00:19:53.685 00:19:53.685 ' 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:53.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.685 --rc genhtml_branch_coverage=1 00:19:53.685 --rc genhtml_function_coverage=1 00:19:53.685 --rc genhtml_legend=1 00:19:53.685 --rc geninfo_all_blocks=1 00:19:53.685 --rc geninfo_unexecuted_blocks=1 00:19:53.685 00:19:53.685 ' 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.685 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:53.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:19:53.686 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:20:01.815 00:01:44 
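The "[: : integer expression expected" message above is bash complaining that nvmf/common.sh line 33 ran a numeric test ('[' '' -eq 1 ']') against a variable that is empty in this configuration; the script tolerates it and carries on. The usual defensive pattern is to give such a flag a numeric default before comparing. A minimal sketch, assuming a stand-in variable name (the actual variable tested at common.sh line 33 is not visible in this trace):

    #!/usr/bin/env bash
    # Hypothetical guard: default a possibly-empty flag to 0 before the numeric
    # test, which avoids "[: : integer expression expected" from test(1).
    some_flag="${SOME_FLAG:-0}"   # SOME_FLAG is a stand-in, not the real variable
    if [ "$some_flag" -eq 1 ]; then
        echo "flag enabled"
    else
        echo "flag disabled"
    fi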
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.815 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:01.816 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:01.816 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:01.816 Found net devices under 0000:af:00.0: cvl_0_0 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.816 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:01.816 Found net devices under 0000:af:00.1: cvl_0_1 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:01.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:20:01.816 00:20:01.816 --- 10.0.0.2 ping statistics --- 00:20:01.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.816 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:20:01.816 00:20:01.816 --- 10.0.0.1 ping statistics --- 00:20:01.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.816 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=370162 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 370162 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 370162 ']' 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:01.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.816 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:01.816 [2024-12-10 00:01:45.398889] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:20:01.816 [2024-12-10 00:01:45.398938] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.816 [2024-12-10 00:01:45.494175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.817 [2024-12-10 00:01:45.534357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.817 [2024-12-10 00:01:45.534392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.817 [2024-12-10 00:01:45.534402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.817 [2024-12-10 00:01:45.534411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.817 [2024-12-10 00:01:45.534418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.817 [2024-12-10 00:01:45.535020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.817 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.817 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:20:01.817 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:01.817 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:01.817 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:01.817 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.817 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:01.817 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.817 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:01.817 [2024-12-10 00:01:46.274170] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.817 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.817 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:01.817 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.817 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:02.075 [2024-12-10 00:01:46.294359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:02.075 NULL1 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.075 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:02.075 [2024-12-10 00:01:46.354870] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
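The rpc_cmd calls traced above build the target configuration over the default /var/tmp/spdk.sock RPC socket: a TCP transport (the -o and -u 8192 flags are mirrored verbatim from the trace), subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB / 512-byte-block null bdev attached as namespace 1 (reported as "size: 1GB" below). A rough equivalent issued directly with scripts/rpc.py, which rpc_cmd wraps — a sketch, assuming the nvmf_tgt started above is still listening on the default RPC socket:

    # Reproduce the traced rpc_cmd sequence by hand (sketch).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1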
00:20:02.076 [2024-12-10 00:01:46.354907] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid370436 ] 00:20:02.334 Attached to nqn.2016-06.io.spdk:cnode1 00:20:02.334 Namespace ID: 1 size: 1GB 00:20:02.334 fused_ordering(0) 00:20:02.334 fused_ordering(1) 00:20:02.334 fused_ordering(2) 00:20:02.334 fused_ordering(3) 00:20:02.334 fused_ordering(4) 00:20:02.334 fused_ordering(5) 00:20:02.334 fused_ordering(6) 00:20:02.334 fused_ordering(7) 00:20:02.334 fused_ordering(8) 00:20:02.334 fused_ordering(9) 00:20:02.334 fused_ordering(10) 00:20:02.334 fused_ordering(11) 00:20:02.334 fused_ordering(12) 00:20:02.334 fused_ordering(13) 00:20:02.334 fused_ordering(14) 00:20:02.334 fused_ordering(15) 00:20:02.334 fused_ordering(16) 00:20:02.334 fused_ordering(17) 00:20:02.334 fused_ordering(18) 00:20:02.334 fused_ordering(19) 00:20:02.334 fused_ordering(20) 00:20:02.334 fused_ordering(21) 00:20:02.334 fused_ordering(22) 00:20:02.334 fused_ordering(23) 00:20:02.334 fused_ordering(24) 00:20:02.334 fused_ordering(25) 00:20:02.334 fused_ordering(26) 00:20:02.334 fused_ordering(27) 00:20:02.334 fused_ordering(28) 00:20:02.334 fused_ordering(29) 00:20:02.334 fused_ordering(30) 00:20:02.334 fused_ordering(31) 00:20:02.334 fused_ordering(32) 00:20:02.334 fused_ordering(33) 00:20:02.334 fused_ordering(34) 00:20:02.334 fused_ordering(35) 00:20:02.334 fused_ordering(36) 00:20:02.334 fused_ordering(37) 00:20:02.334 fused_ordering(38) 00:20:02.334 fused_ordering(39) 00:20:02.334 fused_ordering(40) 00:20:02.334 fused_ordering(41) 00:20:02.334 fused_ordering(42) 00:20:02.334 fused_ordering(43) 00:20:02.334 fused_ordering(44) 00:20:02.334 fused_ordering(45) 00:20:02.334 fused_ordering(46) 00:20:02.334 fused_ordering(47) 00:20:02.334 fused_ordering(48) 00:20:02.334 fused_ordering(49) 00:20:02.334 fused_ordering(50) 00:20:02.334 fused_ordering(51) 00:20:02.334 fused_ordering(52) 00:20:02.334 fused_ordering(53) 00:20:02.334 fused_ordering(54) 00:20:02.334 fused_ordering(55) 00:20:02.334 fused_ordering(56) 00:20:02.334 fused_ordering(57) 00:20:02.334 fused_ordering(58) 00:20:02.334 fused_ordering(59) 00:20:02.334 fused_ordering(60) 00:20:02.334 fused_ordering(61) 00:20:02.334 fused_ordering(62) 00:20:02.334 fused_ordering(63) 00:20:02.334 fused_ordering(64) 00:20:02.334 fused_ordering(65) 00:20:02.334 fused_ordering(66) 00:20:02.334 fused_ordering(67) 00:20:02.334 fused_ordering(68) 00:20:02.334 fused_ordering(69) 00:20:02.334 fused_ordering(70) 00:20:02.334 fused_ordering(71) 00:20:02.334 fused_ordering(72) 00:20:02.334 fused_ordering(73) 00:20:02.334 fused_ordering(74) 00:20:02.334 fused_ordering(75) 00:20:02.334 fused_ordering(76) 00:20:02.334 fused_ordering(77) 00:20:02.334 fused_ordering(78) 00:20:02.334 fused_ordering(79) 00:20:02.334 fused_ordering(80) 00:20:02.334 fused_ordering(81) 00:20:02.334 fused_ordering(82) 00:20:02.334 fused_ordering(83) 00:20:02.334 fused_ordering(84) 00:20:02.334 fused_ordering(85) 00:20:02.334 fused_ordering(86) 00:20:02.334 fused_ordering(87) 00:20:02.334 fused_ordering(88) 00:20:02.334 fused_ordering(89) 00:20:02.335 fused_ordering(90) 00:20:02.335 fused_ordering(91) 00:20:02.335 fused_ordering(92) 00:20:02.335 fused_ordering(93) 00:20:02.335 fused_ordering(94) 00:20:02.335 fused_ordering(95) 00:20:02.335 fused_ordering(96) 00:20:02.335 fused_ordering(97) 00:20:02.335 fused_ordering(98) 
00:20:02.335 fused_ordering(99) ... 00:20:03.679 fused_ordering(958) [counters 99 through 958 follow one per entry, in unbroken ascending order, with timestamps advancing from 00:20:02.335 to 00:20:03.679]
00:20:03.679 fused_ordering(959) 00:20:03.679 fused_ordering(960) 00:20:03.679 fused_ordering(961) 00:20:03.679 fused_ordering(962) 00:20:03.679 fused_ordering(963) 00:20:03.679 fused_ordering(964) 00:20:03.679 fused_ordering(965) 00:20:03.679 fused_ordering(966) 00:20:03.679 fused_ordering(967) 00:20:03.679 fused_ordering(968) 00:20:03.679 fused_ordering(969) 00:20:03.679 fused_ordering(970) 00:20:03.679 fused_ordering(971) 00:20:03.679 fused_ordering(972) 00:20:03.679 fused_ordering(973) 00:20:03.679 fused_ordering(974) 00:20:03.679 fused_ordering(975) 00:20:03.679 fused_ordering(976) 00:20:03.679 fused_ordering(977) 00:20:03.679 fused_ordering(978) 00:20:03.679 fused_ordering(979) 00:20:03.679 fused_ordering(980) 00:20:03.679 fused_ordering(981) 00:20:03.679 fused_ordering(982) 00:20:03.679 fused_ordering(983) 00:20:03.679 fused_ordering(984) 00:20:03.679 fused_ordering(985) 00:20:03.679 fused_ordering(986) 00:20:03.679 fused_ordering(987) 00:20:03.679 fused_ordering(988) 00:20:03.679 fused_ordering(989) 00:20:03.679 fused_ordering(990) 00:20:03.679 fused_ordering(991) 00:20:03.679 fused_ordering(992) 00:20:03.679 fused_ordering(993) 00:20:03.679 fused_ordering(994) 00:20:03.679 fused_ordering(995) 00:20:03.679 fused_ordering(996) 00:20:03.679 fused_ordering(997) 00:20:03.679 fused_ordering(998) 00:20:03.679 fused_ordering(999) 00:20:03.679 fused_ordering(1000) 00:20:03.679 fused_ordering(1001) 00:20:03.679 fused_ordering(1002) 00:20:03.679 fused_ordering(1003) 00:20:03.679 fused_ordering(1004) 00:20:03.679 fused_ordering(1005) 00:20:03.679 fused_ordering(1006) 00:20:03.679 fused_ordering(1007) 00:20:03.679 fused_ordering(1008) 00:20:03.679 fused_ordering(1009) 00:20:03.679 fused_ordering(1010) 00:20:03.679 fused_ordering(1011) 00:20:03.679 fused_ordering(1012) 00:20:03.679 fused_ordering(1013) 00:20:03.679 fused_ordering(1014) 00:20:03.679 fused_ordering(1015) 00:20:03.679 fused_ordering(1016) 00:20:03.679 fused_ordering(1017) 00:20:03.679 fused_ordering(1018) 00:20:03.679 fused_ordering(1019) 00:20:03.679 fused_ordering(1020) 00:20:03.680 fused_ordering(1021) 00:20:03.680 fused_ordering(1022) 00:20:03.680 fused_ordering(1023) 00:20:03.680 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:20:03.680 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:20:03.680 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:03.680 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:20:03.680 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:03.680 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:20:03.680 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:03.680 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:03.938 rmmod nvme_tcp 00:20:03.938 rmmod nvme_fabrics 00:20:03.938 rmmod nvme_keyring 00:20:03.938 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:03.938 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:20:03.938 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:20:03.938 00:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 370162 ']' 00:20:03.938 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 370162 00:20:03.938 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 370162 ']' 00:20:03.938 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 370162 00:20:03.938 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:20:03.938 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.938 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 370162 00:20:03.938 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:03.938 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:03.938 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 370162' 00:20:03.938 killing process with pid 370162 00:20:03.938 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 370162 00:20:03.938 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 370162 00:20:04.198 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:04.198 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:04.198 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:04.198 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:20:04.198 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:20:04.198 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:04.198 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:20:04.198 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:04.198 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:04.198 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.198 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.198 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.138 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:06.138 00:20:06.138 real 0m12.785s 00:20:06.138 user 0m6.226s 00:20:06.138 sys 0m6.970s 00:20:06.138 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.138 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:06.138 ************************************ 00:20:06.138 END TEST nvmf_fused_ordering 00:20:06.138 
************************************ 00:20:06.138 00:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:20:06.138 00:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:06.138 00:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:06.138 00:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:06.399 ************************************ 00:20:06.399 START TEST nvmf_ns_masking 00:20:06.399 ************************************ 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:20:06.399 * Looking for test storage... 00:20:06.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:06.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.399 --rc genhtml_branch_coverage=1 00:20:06.399 --rc genhtml_function_coverage=1 00:20:06.399 --rc genhtml_legend=1 00:20:06.399 --rc geninfo_all_blocks=1 00:20:06.399 --rc geninfo_unexecuted_blocks=1 00:20:06.399 00:20:06.399 ' 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:06.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.399 --rc genhtml_branch_coverage=1 00:20:06.399 --rc genhtml_function_coverage=1 00:20:06.399 --rc genhtml_legend=1 00:20:06.399 --rc geninfo_all_blocks=1 00:20:06.399 --rc geninfo_unexecuted_blocks=1 00:20:06.399 00:20:06.399 ' 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:06.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.399 --rc genhtml_branch_coverage=1 00:20:06.399 --rc genhtml_function_coverage=1 00:20:06.399 --rc genhtml_legend=1 00:20:06.399 --rc geninfo_all_blocks=1 00:20:06.399 --rc geninfo_unexecuted_blocks=1 00:20:06.399 00:20:06.399 ' 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:06.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.399 --rc genhtml_branch_coverage=1 00:20:06.399 --rc genhtml_function_coverage=1 00:20:06.399 --rc genhtml_legend=1 00:20:06.399 --rc geninfo_all_blocks=1 00:20:06.399 --rc geninfo_unexecuted_blocks=1 00:20:06.399 00:20:06.399 ' 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:06.399 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:06.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=418500ff-c262-4ba9-abda-f893f238fd97 00:20:06.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=171fc34e-ff35-4a3b-b764-d98c1b959324 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=9e16bd07-9e8a-42f7-8cf1-1b69321ce92b 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:20:06.661 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:14.796 00:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:14.796 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:14.796 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:14.796 00:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:14.797 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:14.797 Found net devices under 0000:af:00.0: cvl_0_0 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
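The records above come from gather_supported_nvmf_pci_devs: it matches the Intel E810 device ID (0x8086 - 0x159b) against the PCI bus and then resolves each function's kernel net device through sysfs. A minimal sketch of that sysfs lookup, assuming only the two port addresses reported in this run (0000:af:00.0 and 0000:af:00.1); it is an illustration of the lookup, not the test's exact helper:

    # Resolve the net device name behind each E810 PCI function, as the
    # "Found net devices under ..." records above report (cvl_0_0, cvl_0_1).
    for pci in 0000:af:00.0 0000:af:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$dev" ] || continue   # skip if the function has no net device
            echo "Found net device under $pci: ${dev##*/}"
        done
    done
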
00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:14.797 Found net devices under 0000:af:00.1: cvl_0_1 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.797 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.797 00:01:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:14.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:20:14.797 00:20:14.797 --- 10.0.0.2 ping statistics --- 00:20:14.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.797 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:14.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:20:14.797 00:20:14.797 --- 10.0.0.1 ping statistics --- 00:20:14.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.797 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=374399 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 374399 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 374399 ']' 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.797 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:14.797 [2024-12-10 00:01:58.249646] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:20:14.797 [2024-12-10 00:01:58.249700] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.797 [2024-12-10 00:01:58.345431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.797 [2024-12-10 00:01:58.386063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.797 [2024-12-10 00:01:58.386107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.797 [2024-12-10 00:01:58.386121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.797 [2024-12-10 00:01:58.386151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.797 [2024-12-10 00:01:58.386163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
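The preceding records show nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and waitforlisten blocking on /var/tmp/spdk.sock before any RPCs are issued. A minimal sketch of that launch-and-wait pattern, using the workspace path, namespace name, and flags printed in this log; the readiness probe via rpc_get_methods is an assumption standing in for the exact check waitforlisten performs:

    # Start the target inside the test netns, then wait for its RPC socket.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Poll the default RPC socket until the target answers.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 1
    done
    # Once it is listening, create the TCP transport the test uses below.
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
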
00:20:14.797 [2024-12-10 00:01:58.386835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.797 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.797 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:20:14.797 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.797 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.797 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:14.797 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.798 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:15.057 [2024-12-10 00:01:59.289775] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.057 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:20:15.057 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:20:15.057 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:15.057 Malloc1 00:20:15.057 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:15.314 Malloc2 00:20:15.314 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:15.572 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:20:15.831 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.831 [2024-12-10 00:02:00.265370] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.831 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:20:15.831 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9e16bd07-9e8a-42f7-8cf1-1b69321ce92b -a 10.0.0.2 -s 4420 -i 4 00:20:16.090 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:20:16.090 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:20:16.090 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:16.090 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:16.090 
00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:20:17.991 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:17.991 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:17.991 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:17.991 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:17.991 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:17.991 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:20:17.991 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:17.991 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:18.250 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:18.250 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:18.250 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:20:18.251 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:18.251 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:18.251 [ 0]:0x1 00:20:18.251 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:18.251 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:18.251 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d15314854a314ee9899d33a0dc6ccc0e 00:20:18.251 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d15314854a314ee9899d33a0dc6ccc0e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:18.251 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:18.509 [ 0]:0x1 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d15314854a314ee9899d33a0dc6ccc0e 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d15314854a314ee9899d33a0dc6ccc0e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:18.509 00:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:18.509 [ 1]:0x2 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28511c8a572b4f669df348210f4943ea 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28511c8a572b4f669df348210f4943ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:18.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:18.509 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:18.768 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:20:19.027 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:20:19.027 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9e16bd07-9e8a-42f7-8cf1-1b69321ce92b -a 10.0.0.2 -s 4420 -i 4 00:20:19.286 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:20:19.286 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:20:19.286 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:19.286 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:20:19.286 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:20:19.286 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:21.194 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:21.453 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:21.453 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:21.453 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:21.453 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:21.453 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:21.453 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:21.453 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:20:21.453 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:21.453 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:21.453 [ 0]:0x2 00:20:21.453 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:21.453 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:21.453 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=28511c8a572b4f669df348210f4943ea 00:20:21.453 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28511c8a572b4f669df348210f4943ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:21.453 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:21.711 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:20:21.711 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:21.711 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:21.711 [ 0]:0x1 00:20:21.711 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:21.711 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:21.711 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d15314854a314ee9899d33a0dc6ccc0e 00:20:21.711 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d15314854a314ee9899d33a0dc6ccc0e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:21.711 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:20:21.711 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:21.711 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:21.712 [ 1]:0x2 00:20:21.712 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:21.712 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:21.712 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28511c8a572b4f669df348210f4943ea 00:20:21.712 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28511c8a572b4f669df348210f4943ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:21.712 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:21.970 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:20:21.970 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.971 00:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:21.971 [ 0]:0x2 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28511c8a572b4f669df348210f4943ea 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28511c8a572b4f669df348210f4943ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:21.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:21.971 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:22.230 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:20:22.230 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9e16bd07-9e8a-42f7-8cf1-1b69321ce92b -a 10.0.0.2 -s 4420 -i 4 00:20:22.489 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:22.489 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:20:22.489 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:22.489 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:20:22.489 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:20:22.489 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:20:24.396 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:24.396 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:24.396 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:24.396 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:20:24.396 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:24.396 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:20:24.396 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:24.396 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:24.656 [ 0]:0x1 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d15314854a314ee9899d33a0dc6ccc0e 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d15314854a314ee9899d33a0dc6ccc0e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:24.656 [ 1]:0x2 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28511c8a572b4f669df348210f4943ea 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28511c8a572b4f669df348210f4943ea != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:24.656 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:24.915 [ 0]:0x2 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28511c8a572b4f669df348210f4943ea 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28511c8a572b4f669df348210f4943ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:24.915 00:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:20:24.915 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:25.175 [2024-12-10 00:02:09.451586] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:20:25.175 request: 00:20:25.175 { 00:20:25.175 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.175 "nsid": 2, 00:20:25.175 "host": "nqn.2016-06.io.spdk:host1", 00:20:25.175 "method": "nvmf_ns_remove_host", 00:20:25.175 "req_id": 1 00:20:25.175 } 00:20:25.175 Got JSON-RPC error response 00:20:25.175 response: 00:20:25.175 { 00:20:25.175 "code": -32602, 00:20:25.175 "message": "Invalid parameters" 00:20:25.175 } 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:25.175 00:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:25.175 [ 0]:0x2 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:25.175 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:25.435 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28511c8a572b4f669df348210f4943ea 00:20:25.435 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28511c8a572b4f669df348210f4943ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:25.435 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:20:25.435 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:25.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:25.435 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=376619 00:20:25.435 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.435 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 376619 
/var/tmp/host.sock 00:20:25.435 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 376619 ']' 00:20:25.435 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:20:25.435 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:25.435 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.435 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:25.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:25.435 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.435 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:25.435 [2024-12-10 00:02:09.849145] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:20:25.435 [2024-12-10 00:02:09.849195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376619 ] 00:20:25.694 [2024-12-10 00:02:09.939276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.694 [2024-12-10 00:02:09.977681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.265 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.265 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:20:26.265 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:26.523 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:26.781 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 418500ff-c262-4ba9-abda-f893f238fd97 00:20:26.781 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:20:26.781 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 418500FFC2624BA9ABDAF893F238FD97 -i 00:20:27.040 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 171fc34e-ff35-4a3b-b764-d98c1b959324 00:20:27.040 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:20:27.040 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 171FC34EFF354A3BB764D98C1B959324 -i 00:20:27.040 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:27.298 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:20:27.556 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:20:27.556 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:20:27.814 nvme0n1 00:20:27.814 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:20:27.814 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:20:28.072 nvme1n2 00:20:28.072 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:20:28.073 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:20:28.073 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:20:28.073 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:20:28.073 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:20:28.331 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:20:28.331 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:20:28.331 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:20:28.331 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:20:28.589 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 418500ff-c262-4ba9-abda-f893f238fd97 == \4\1\8\5\0\0\f\f\-\c\2\6\2\-\4\b\a\9\-\a\b\d\a\-\f\8\9\3\f\2\3\8\f\d\9\7 ]] 00:20:28.589 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:20:28.589 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:20:28.590 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:20:28.848 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
171fc34e-ff35-4a3b-b764-d98c1b959324 == \1\7\1\f\c\3\4\e\-\f\f\3\5\-\4\a\3\b\-\b\7\6\4\-\d\9\8\c\1\b\9\5\9\3\2\4 ]] 00:20:28.848 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:28.848 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:29.106 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 418500ff-c262-4ba9-abda-f893f238fd97 00:20:29.106 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:20:29.106 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 418500FFC2624BA9ABDAF893F238FD97 00:20:29.106 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:29.106 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 418500FFC2624BA9ABDAF893F238FD97 00:20:29.106 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:29.106 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.106 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:29.106 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.106 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:29.106 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.106 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:29.106 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:20:29.106 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 418500FFC2624BA9ABDAF893F238FD97 00:20:29.364 [2024-12-10 00:02:13.631060] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:20:29.364 [2024-12-10 00:02:13.631091] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:20:29.364 [2024-12-10 00:02:13.631102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.364 request: 00:20:29.364 { 00:20:29.364 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.364 "namespace": { 00:20:29.364 "bdev_name": 
"invalid", 00:20:29.364 "nsid": 1, 00:20:29.364 "nguid": "418500FFC2624BA9ABDAF893F238FD97", 00:20:29.364 "no_auto_visible": false, 00:20:29.364 "hide_metadata": false 00:20:29.364 }, 00:20:29.364 "method": "nvmf_subsystem_add_ns", 00:20:29.364 "req_id": 1 00:20:29.364 } 00:20:29.364 Got JSON-RPC error response 00:20:29.364 response: 00:20:29.364 { 00:20:29.364 "code": -32602, 00:20:29.364 "message": "Invalid parameters" 00:20:29.364 } 00:20:29.364 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:29.364 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:29.364 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:29.364 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:29.364 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 418500ff-c262-4ba9-abda-f893f238fd97 00:20:29.364 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:20:29.364 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 418500FFC2624BA9ABDAF893F238FD97 -i 00:20:29.623 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:20:31.522 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:20:31.522 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:20:31.522 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:20:31.781 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:20:31.781 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 376619 00:20:31.781 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 376619 ']' 00:20:31.781 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 376619 00:20:31.781 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:20:31.781 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.781 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376619 00:20:31.781 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:31.781 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:31.781 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376619' 00:20:31.781 killing process with pid 376619 00:20:31.781 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 376619 00:20:31.781 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 376619 00:20:32.040 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:32.299 rmmod nvme_tcp 00:20:32.299 rmmod nvme_fabrics 00:20:32.299 rmmod nvme_keyring 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 374399 ']' 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 374399 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 374399 ']' 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 374399 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.299 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 374399 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 374399' 00:20:32.559 killing process with pid 374399 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 374399 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 374399 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:32.559 
00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.559 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:35.101 00:20:35.101 real 0m28.449s 00:20:35.101 user 0m32.253s 00:20:35.101 sys 0m9.284s 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:35.101 ************************************ 00:20:35.101 END TEST nvmf_ns_masking 00:20:35.101 ************************************ 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:35.101 ************************************ 00:20:35.101 START TEST nvmf_nvme_cli 00:20:35.101 ************************************ 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:20:35.101 * Looking for test storage... 
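A condensed sketch of the check that the ns_masking trace above keeps repeating may help when reading it back: a namespace counts as visible to the connected host only if it shows up in 'nvme list-ns' and 'nvme id-ns' reports a non-zero NGUID for it, while a masked namespace either drops out of the list or comes back with an all-zero NGUID. This is only an illustrative simplification of the ns_is_visible helper in target/ns_masking.sh, not the literal script, and the quiet grep is an assumption made here for brevity.

    # Sketch of the host-side visibility probe used throughout the ns_masking test.
    ns_is_visible() {
        local ctrl=$1 nsid=$2 nguid
        # The namespace must appear in the active namespace list for this controller...
        nvme list-ns "/dev/$ctrl" | grep -q "$nsid" || return 1
        # ...and identify-namespace must return a real (non-zero) NGUID for it.
        nguid=$(nvme id-ns "/dev/$ctrl" -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    # Usage mirroring the trace: after nvmf_ns_add_host, nsid 0x1 becomes visible;
    # after nvmf_ns_remove_host, the same check is expected to fail (wrapped in NOT).
    ns_is_visible nvme0 0x1 && echo "nsid 1 visible to this host"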
00:20:35.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:35.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.101 --rc genhtml_branch_coverage=1 00:20:35.101 --rc genhtml_function_coverage=1 00:20:35.101 --rc genhtml_legend=1 00:20:35.101 --rc geninfo_all_blocks=1 00:20:35.101 --rc geninfo_unexecuted_blocks=1 00:20:35.101 00:20:35.101 ' 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:35.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.101 --rc genhtml_branch_coverage=1 00:20:35.101 --rc genhtml_function_coverage=1 00:20:35.101 --rc genhtml_legend=1 00:20:35.101 --rc geninfo_all_blocks=1 00:20:35.101 --rc geninfo_unexecuted_blocks=1 00:20:35.101 00:20:35.101 ' 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:35.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.101 --rc genhtml_branch_coverage=1 00:20:35.101 --rc genhtml_function_coverage=1 00:20:35.101 --rc genhtml_legend=1 00:20:35.101 --rc geninfo_all_blocks=1 00:20:35.101 --rc geninfo_unexecuted_blocks=1 00:20:35.101 00:20:35.101 ' 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:35.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.101 --rc genhtml_branch_coverage=1 00:20:35.101 --rc genhtml_function_coverage=1 00:20:35.101 --rc genhtml_legend=1 00:20:35.101 --rc geninfo_all_blocks=1 00:20:35.101 --rc geninfo_unexecuted_blocks=1 00:20:35.101 00:20:35.101 ' 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
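The lcov probe traced just above walks through the dotted-version comparison in scripts/common.sh (lt / cmp_versions). A condensed bash sketch of that logic is given below; the helper name lt_version and the numeric-only field handling are assumptions made here for illustration, not the literal common.sh code.

    # Split both version strings on ".-:" and compare numerically, field by field.
    lt_version() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"          # "1.15" -> (1 15)
        IFS='.-:' read -ra ver2 <<< "$2"          # "2"    -> (2)
        local len1=${#ver1[@]} len2=${#ver2[@]} v d1 d2
        local max=$(( len1 > len2 ? len1 : len2 ))
        for ((v = 0; v < max; v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}     # missing fields count as 0
            (( d1 > d2 )) && return 1             # first version is newer: not less-than
            (( d1 < d2 )) && return 0             # first version is older: less-than
        done
        return 1                                  # equal: not less-than
    }

    # Usage mirroring the trace: lcov 1.15 is older than 2, so the "old lcov" branch is taken.
    if lt_version 1.15 2; then echo "old lcov detected"; fi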
00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:35.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:35.101 00:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:20:35.101 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:43.229 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:43.230 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:43.230 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.230 
00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:43.230 Found net devices under 0000:af:00.0: cvl_0_0 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:43.230 Found net devices under 0000:af:00.1: cvl_0_1 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:43.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:20:43.230 00:20:43.230 --- 10.0.0.2 ping statistics --- 00:20:43.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.230 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:43.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:20:43.230 00:20:43.230 --- 10.0.0.1 ping statistics --- 00:20:43.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.230 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.230 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=381456 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 381456 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 381456 ']' 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.231 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:43.231 [2024-12-10 00:02:26.728452] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:20:43.231 [2024-12-10 00:02:26.728500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.231 [2024-12-10 00:02:26.825904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.231 [2024-12-10 00:02:26.869029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.231 [2024-12-10 00:02:26.869066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.231 [2024-12-10 00:02:26.869075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.231 [2024-12-10 00:02:26.869084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.231 [2024-12-10 00:02:26.869091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:43.231 [2024-12-10 00:02:26.870676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.231 [2024-12-10 00:02:26.870785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.231 [2024-12-10 00:02:26.870893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.231 [2024-12-10 00:02:26.870894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:43.231 [2024-12-10 00:02:27.614158] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:43.231 Malloc0 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:43.231 Malloc1 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.231 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:43.497 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.497 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:43.497 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.497 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:43.497 [2024-12-10 00:02:27.714688] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.497 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.497 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:43.497 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.497 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:43.497 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.497 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:20:43.497 00:20:43.497 Discovery Log Number of Records 2, Generation counter 2 00:20:43.497 =====Discovery Log Entry 0====== 00:20:43.497 trtype: tcp 00:20:43.497 adrfam: ipv4 00:20:43.497 subtype: current discovery subsystem 00:20:43.497 treq: not required 00:20:43.498 portid: 0 00:20:43.498 trsvcid: 4420 00:20:43.498 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:20:43.498 traddr: 10.0.0.2 00:20:43.498 eflags: explicit discovery connections, duplicate discovery information 00:20:43.498 sectype: none 00:20:43.498 =====Discovery Log Entry 1====== 00:20:43.498 trtype: tcp 00:20:43.498 adrfam: ipv4 00:20:43.498 subtype: nvme subsystem 00:20:43.498 treq: not required 00:20:43.498 portid: 0 00:20:43.498 trsvcid: 4420 00:20:43.498 subnqn: nqn.2016-06.io.spdk:cnode1 00:20:43.498 traddr: 10.0.0.2 00:20:43.498 eflags: none 00:20:43.498 sectype: none 00:20:43.498 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:20:43.498 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:20:43.498 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:20:43.498 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:43.498 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:43.498 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:43.498 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:43.498 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:20:43.498 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:43.498 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:20:43.498 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:44.872 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:44.872 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:20:44.872 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:44.872 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:20:44.872 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:20:44.872 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:20:47.401 00:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:20:47.401 /dev/nvme0n2 ]] 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:47.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:47.401 00:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:47.401 rmmod nvme_tcp 00:20:47.401 rmmod nvme_fabrics 00:20:47.401 rmmod nvme_keyring 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 381456 ']' 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 381456 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 381456 ']' 00:20:47.401 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 381456 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 381456 
00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 381456' 00:20:47.402 killing process with pid 381456 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 381456 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 381456 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.402 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.939 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:49.939 00:20:49.939 real 0m14.765s 00:20:49.939 user 0m21.816s 00:20:49.939 sys 0m6.443s 00:20:49.940 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.940 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:49.940 ************************************ 00:20:49.940 END TEST nvmf_nvme_cli 00:20:49.940 ************************************ 00:20:49.940 00:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:20:49.940 00:02:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:20:49.940 00:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:49.940 00:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.940 00:02:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:49.940 ************************************ 00:20:49.940 START TEST nvmf_vfio_user 00:20:49.940 ************************************ 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:20:49.940 * Looking for test storage... 00:20:49.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:49.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.940 --rc genhtml_branch_coverage=1 00:20:49.940 --rc genhtml_function_coverage=1 00:20:49.940 --rc genhtml_legend=1 00:20:49.940 --rc geninfo_all_blocks=1 00:20:49.940 --rc geninfo_unexecuted_blocks=1 00:20:49.940 00:20:49.940 ' 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:49.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.940 --rc genhtml_branch_coverage=1 00:20:49.940 --rc genhtml_function_coverage=1 00:20:49.940 --rc genhtml_legend=1 00:20:49.940 --rc geninfo_all_blocks=1 00:20:49.940 --rc geninfo_unexecuted_blocks=1 00:20:49.940 00:20:49.940 ' 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:49.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.940 --rc genhtml_branch_coverage=1 00:20:49.940 --rc genhtml_function_coverage=1 00:20:49.940 --rc genhtml_legend=1 00:20:49.940 --rc geninfo_all_blocks=1 00:20:49.940 --rc geninfo_unexecuted_blocks=1 00:20:49.940 00:20:49.940 ' 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:49.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.940 --rc genhtml_branch_coverage=1 00:20:49.940 --rc genhtml_function_coverage=1 00:20:49.940 --rc genhtml_legend=1 00:20:49.940 --rc geninfo_all_blocks=1 00:20:49.940 --rc geninfo_unexecuted_blocks=1 00:20:49.940 00:20:49.940 ' 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:49.940 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:49.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=382909 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 382909' 00:20:49.941 Process pid: 382909 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 382909 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 382909 ']' 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.941 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:49.941 [2024-12-10 00:02:34.288531] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:20:49.941 [2024-12-10 00:02:34.288580] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.941 [2024-12-10 00:02:34.375719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:50.200 [2024-12-10 00:02:34.413716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.200 [2024-12-10 00:02:34.413751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:50.200 [2024-12-10 00:02:34.413760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.200 [2024-12-10 00:02:34.413768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.200 [2024-12-10 00:02:34.413775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.200 [2024-12-10 00:02:34.415373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.200 [2024-12-10 00:02:34.415486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.200 [2024-12-10 00:02:34.415591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.200 [2024-12-10 00:02:34.415592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.764 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.764 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:20:50.764 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:51.697 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:20:51.955 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:51.955 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:51.955 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:51.955 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:51.955 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:52.213 Malloc1 00:20:52.213 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:52.477 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:52.740 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:52.740 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:52.740 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:52.740 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:52.998 Malloc2 00:20:52.998 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:20:53.256 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:53.515 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:53.515 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:20:53.515 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:20:53.515 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:53.515 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:20:53.515 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:20:53.515 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:20:53.775 [2024-12-10 00:02:37.997068] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:20:53.775 [2024-12-10 00:02:37.997106] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383541 ] 00:20:53.775 [2024-12-10 00:02:38.039154] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:20:53.775 [2024-12-10 00:02:38.044532] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:53.775 [2024-12-10 00:02:38.044557] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3acb877000 00:20:53.775 [2024-12-10 00:02:38.045532] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:53.775 [2024-12-10 00:02:38.046531] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:53.775 [2024-12-10 00:02:38.047541] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:53.775 [2024-12-10 00:02:38.048544] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:53.775 [2024-12-10 00:02:38.049549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:53.775 [2024-12-10 00:02:38.050557] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:53.775 [2024-12-10 00:02:38.051563] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:20:53.775 [2024-12-10 00:02:38.052572] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:53.776 [2024-12-10 00:02:38.053573] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:53.776 [2024-12-10 00:02:38.053584] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3acb86c000 00:20:53.776 [2024-12-10 00:02:38.054478] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:53.776 [2024-12-10 00:02:38.063779] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:20:53.776 [2024-12-10 00:02:38.063815] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:20:53.776 [2024-12-10 00:02:38.069682] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:20:53.776 [2024-12-10 00:02:38.069720] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:20:53.776 [2024-12-10 00:02:38.069792] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:20:53.776 [2024-12-10 00:02:38.069811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:20:53.776 [2024-12-10 00:02:38.069818] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:20:53.776 [2024-12-10 00:02:38.070678] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:20:53.776 [2024-12-10 00:02:38.070690] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:20:53.776 [2024-12-10 00:02:38.070699] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:20:53.776 [2024-12-10 00:02:38.071685] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:20:53.776 [2024-12-10 00:02:38.071696] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:20:53.776 [2024-12-10 00:02:38.071706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:20:53.776 [2024-12-10 00:02:38.072695] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:20:53.776 [2024-12-10 00:02:38.072706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:53.776 [2024-12-10 00:02:38.073699] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:20:53.776 [2024-12-10 00:02:38.073709] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:20:53.776 [2024-12-10 00:02:38.073718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:20:53.776 [2024-12-10 00:02:38.073727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:53.776 [2024-12-10 00:02:38.073836] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:20:53.776 [2024-12-10 00:02:38.073843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:53.776 [2024-12-10 00:02:38.073849] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:20:53.776 [2024-12-10 00:02:38.074719] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:20:53.776 [2024-12-10 00:02:38.075710] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:20:53.776 [2024-12-10 00:02:38.076717] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:20:53.776 [2024-12-10 00:02:38.077715] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:53.776 [2024-12-10 00:02:38.077788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:53.776 [2024-12-10 00:02:38.078733] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:20:53.776 [2024-12-10 00:02:38.078742] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:53.776 [2024-12-10 00:02:38.078749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:20:53.776 [2024-12-10 00:02:38.078768] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:20:53.776 [2024-12-10 00:02:38.078781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:20:53.776 [2024-12-10 00:02:38.078802] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:53.776 [2024-12-10 00:02:38.078808] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:53.776 [2024-12-10 00:02:38.078813] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:53.776 [2024-12-10 00:02:38.078832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:20:53.776 [2024-12-10 00:02:38.078882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:20:53.776 [2024-12-10 00:02:38.078895] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:20:53.776 [2024-12-10 00:02:38.078902] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:20:53.776 [2024-12-10 00:02:38.078907] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:20:53.776 [2024-12-10 00:02:38.078914] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:20:53.776 [2024-12-10 00:02:38.078920] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:20:53.776 [2024-12-10 00:02:38.078926] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:20:53.776 [2024-12-10 00:02:38.078934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:20:53.776 [2024-12-10 00:02:38.078945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:20:53.776 [2024-12-10 00:02:38.078956] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:20:53.776 [2024-12-10 00:02:38.078971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:20:53.776 [2024-12-10 00:02:38.078984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.776 [2024-12-10 00:02:38.078993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.776 [2024-12-10 00:02:38.079002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.776 [2024-12-10 00:02:38.079012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.776 [2024-12-10 00:02:38.079018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:20:53.776 [2024-12-10 00:02:38.079028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:53.776 [2024-12-10 00:02:38.079038] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:20:53.776 [2024-12-10 00:02:38.079048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:20:53.776 [2024-12-10 00:02:38.079055] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:20:53.776 
[2024-12-10 00:02:38.079062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:53.776 [2024-12-10 00:02:38.079070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:20:53.776 [2024-12-10 00:02:38.079078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:20:53.776 [2024-12-10 00:02:38.079087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:53.776 [2024-12-10 00:02:38.079102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:20:53.776 [2024-12-10 00:02:38.079151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:20:53.776 [2024-12-10 00:02:38.079161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:20:53.776 [2024-12-10 00:02:38.079169] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:20:53.776 [2024-12-10 00:02:38.079175] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:20:53.776 [2024-12-10 00:02:38.079179] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:53.776 [2024-12-10 00:02:38.079186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:20:53.776 [2024-12-10 00:02:38.079200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:20:53.776 [2024-12-10 00:02:38.079212] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:20:53.776 [2024-12-10 00:02:38.079223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:20:53.776 [2024-12-10 00:02:38.079232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:20:53.776 [2024-12-10 00:02:38.079241] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:53.777 [2024-12-10 00:02:38.079246] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:53.777 [2024-12-10 00:02:38.079251] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:53.777 [2024-12-10 00:02:38.079257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:53.777 [2024-12-10 00:02:38.079282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:20:53.777 [2024-12-10 00:02:38.079298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:20:53.777 [2024-12-10 00:02:38.079307] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:53.777 [2024-12-10 00:02:38.079315] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:53.777 [2024-12-10 00:02:38.079321] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:53.777 [2024-12-10 00:02:38.079326] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:53.777 [2024-12-10 00:02:38.079333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:53.777 [2024-12-10 00:02:38.079343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:20:53.777 [2024-12-10 00:02:38.079353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:53.777 [2024-12-10 00:02:38.079361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:20:53.777 [2024-12-10 00:02:38.079370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:20:53.777 [2024-12-10 00:02:38.079382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:20:53.777 [2024-12-10 00:02:38.079389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:53.777 [2024-12-10 00:02:38.079395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:20:53.777 [2024-12-10 00:02:38.079402] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:20:53.777 [2024-12-10 00:02:38.079408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:20:53.777 [2024-12-10 00:02:38.079414] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:20:53.777 [2024-12-10 00:02:38.079434] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:20:53.777 [2024-12-10 00:02:38.079447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:20:53.777 [2024-12-10 00:02:38.079460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:20:53.777 [2024-12-10 00:02:38.079470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:20:53.777 [2024-12-10 00:02:38.079483] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:20:53.777 [2024-12-10 00:02:38.079491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:20:53.777 [2024-12-10 00:02:38.079504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:53.777 [2024-12-10 00:02:38.079513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:20:53.777 [2024-12-10 00:02:38.079530] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:20:53.777 [2024-12-10 00:02:38.079536] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:20:53.777 [2024-12-10 00:02:38.079541] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:20:53.777 [2024-12-10 00:02:38.079546] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:20:53.777 [2024-12-10 00:02:38.079552] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:20:53.777 [2024-12-10 00:02:38.079560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:20:53.777 [2024-12-10 00:02:38.079568] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:20:53.777 [2024-12-10 00:02:38.079574] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:20:53.777 [2024-12-10 00:02:38.079578] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:53.777 [2024-12-10 00:02:38.079586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:20:53.777 [2024-12-10 00:02:38.079595] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:20:53.777 [2024-12-10 00:02:38.079601] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:53.777 [2024-12-10 00:02:38.079606] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:53.777 [2024-12-10 00:02:38.079614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:53.777 [2024-12-10 00:02:38.079622] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:20:53.777 [2024-12-10 00:02:38.079628] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:20:53.777 [2024-12-10 00:02:38.079632] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:53.777 [2024-12-10 00:02:38.079639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:20:53.777 [2024-12-10 00:02:38.079648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:20:53.777 [2024-12-10 00:02:38.079662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:20:53.777 [2024-12-10 00:02:38.079674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:20:53.777 [2024-12-10 00:02:38.079685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:20:53.777 ===================================================== 00:20:53.777 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:53.777 ===================================================== 00:20:53.777 Controller Capabilities/Features 00:20:53.777 ================================ 00:20:53.777 Vendor ID: 4e58 00:20:53.777 Subsystem Vendor ID: 4e58 00:20:53.777 Serial Number: SPDK1 00:20:53.777 Model Number: SPDK bdev Controller 00:20:53.777 Firmware Version: 25.01 00:20:53.777 Recommended Arb Burst: 6 00:20:53.777 IEEE OUI Identifier: 8d 6b 50 00:20:53.777 Multi-path I/O 00:20:53.777 May have multiple subsystem ports: Yes 00:20:53.777 May have multiple controllers: Yes 00:20:53.777 Associated with SR-IOV VF: No 00:20:53.777 Max Data Transfer Size: 131072 00:20:53.777 Max Number of Namespaces: 32 00:20:53.777 Max Number of I/O Queues: 127 00:20:53.777 NVMe Specification Version (VS): 1.3 00:20:53.777 NVMe Specification Version (Identify): 1.3 00:20:53.777 Maximum Queue Entries: 256 00:20:53.777 Contiguous Queues Required: Yes 00:20:53.777 Arbitration Mechanisms Supported 00:20:53.777 Weighted Round Robin: Not Supported 00:20:53.777 Vendor Specific: Not Supported 00:20:53.777 Reset Timeout: 15000 ms 00:20:53.777 Doorbell Stride: 4 bytes 00:20:53.777 NVM Subsystem Reset: Not Supported 00:20:53.777 Command Sets Supported 00:20:53.777 NVM Command Set: Supported 00:20:53.777 Boot Partition: Not Supported 00:20:53.777 Memory Page Size Minimum: 4096 bytes 00:20:53.777 Memory Page Size Maximum: 4096 bytes 00:20:53.777 Persistent Memory Region: Not Supported 00:20:53.777 Optional Asynchronous Events Supported 00:20:53.777 Namespace Attribute Notices: Supported 00:20:53.777 Firmware Activation Notices: Not Supported 00:20:53.777 ANA Change Notices: Not Supported 00:20:53.777 PLE Aggregate Log Change Notices: Not Supported 00:20:53.777 LBA Status Info Alert Notices: Not Supported 00:20:53.777 EGE Aggregate Log Change Notices: Not Supported 00:20:53.777 Normal NVM Subsystem Shutdown event: Not Supported 00:20:53.777 Zone Descriptor Change Notices: Not Supported 00:20:53.777 Discovery Log Change Notices: Not Supported 00:20:53.777 Controller Attributes 00:20:53.777 128-bit Host Identifier: Supported 00:20:53.777 Non-Operational Permissive Mode: Not Supported 00:20:53.777 NVM Sets: Not Supported 00:20:53.777 Read Recovery Levels: Not Supported 00:20:53.777 Endurance Groups: Not Supported 00:20:53.777 Predictable Latency Mode: Not Supported 00:20:53.777 Traffic Based Keep ALive: Not Supported 00:20:53.777 Namespace Granularity: Not Supported 00:20:53.777 SQ Associations: Not Supported 00:20:53.777 UUID List: Not Supported 00:20:53.777 Multi-Domain Subsystem: Not Supported 00:20:53.777 Fixed Capacity Management: Not Supported 00:20:53.777 Variable Capacity Management: Not Supported 00:20:53.777 Delete Endurance Group: Not Supported 00:20:53.777 Delete NVM Set: Not Supported 00:20:53.777 Extended LBA Formats Supported: Not Supported 00:20:53.777 Flexible Data Placement Supported: Not Supported 00:20:53.777 00:20:53.778 Controller Memory Buffer Support 00:20:53.778 ================================ 00:20:53.778 
Supported: No 00:20:53.778 00:20:53.778 Persistent Memory Region Support 00:20:53.778 ================================ 00:20:53.778 Supported: No 00:20:53.778 00:20:53.778 Admin Command Set Attributes 00:20:53.778 ============================ 00:20:53.778 Security Send/Receive: Not Supported 00:20:53.778 Format NVM: Not Supported 00:20:53.778 Firmware Activate/Download: Not Supported 00:20:53.778 Namespace Management: Not Supported 00:20:53.778 Device Self-Test: Not Supported 00:20:53.778 Directives: Not Supported 00:20:53.778 NVMe-MI: Not Supported 00:20:53.778 Virtualization Management: Not Supported 00:20:53.778 Doorbell Buffer Config: Not Supported 00:20:53.778 Get LBA Status Capability: Not Supported 00:20:53.778 Command & Feature Lockdown Capability: Not Supported 00:20:53.778 Abort Command Limit: 4 00:20:53.778 Async Event Request Limit: 4 00:20:53.778 Number of Firmware Slots: N/A 00:20:53.778 Firmware Slot 1 Read-Only: N/A 00:20:53.778 Firmware Activation Without Reset: N/A 00:20:53.778 Multiple Update Detection Support: N/A 00:20:53.778 Firmware Update Granularity: No Information Provided 00:20:53.778 Per-Namespace SMART Log: No 00:20:53.778 Asymmetric Namespace Access Log Page: Not Supported 00:20:53.778 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:20:53.778 Command Effects Log Page: Supported 00:20:53.778 Get Log Page Extended Data: Supported 00:20:53.778 Telemetry Log Pages: Not Supported 00:20:53.778 Persistent Event Log Pages: Not Supported 00:20:53.778 Supported Log Pages Log Page: May Support 00:20:53.778 Commands Supported & Effects Log Page: Not Supported 00:20:53.778 Feature Identifiers & Effects Log Page:May Support 00:20:53.778 NVMe-MI Commands & Effects Log Page: May Support 00:20:53.778 Data Area 4 for Telemetry Log: Not Supported 00:20:53.778 Error Log Page Entries Supported: 128 00:20:53.778 Keep Alive: Supported 00:20:53.778 Keep Alive Granularity: 10000 ms 00:20:53.778 00:20:53.778 NVM Command Set Attributes 00:20:53.778 ========================== 00:20:53.778 Submission Queue Entry Size 00:20:53.778 Max: 64 00:20:53.778 Min: 64 00:20:53.778 Completion Queue Entry Size 00:20:53.778 Max: 16 00:20:53.778 Min: 16 00:20:53.778 Number of Namespaces: 32 00:20:53.778 Compare Command: Supported 00:20:53.778 Write Uncorrectable Command: Not Supported 00:20:53.778 Dataset Management Command: Supported 00:20:53.778 Write Zeroes Command: Supported 00:20:53.778 Set Features Save Field: Not Supported 00:20:53.778 Reservations: Not Supported 00:20:53.778 Timestamp: Not Supported 00:20:53.778 Copy: Supported 00:20:53.778 Volatile Write Cache: Present 00:20:53.778 Atomic Write Unit (Normal): 1 00:20:53.778 Atomic Write Unit (PFail): 1 00:20:53.778 Atomic Compare & Write Unit: 1 00:20:53.778 Fused Compare & Write: Supported 00:20:53.778 Scatter-Gather List 00:20:53.778 SGL Command Set: Supported (Dword aligned) 00:20:53.778 SGL Keyed: Not Supported 00:20:53.778 SGL Bit Bucket Descriptor: Not Supported 00:20:53.778 SGL Metadata Pointer: Not Supported 00:20:53.778 Oversized SGL: Not Supported 00:20:53.778 SGL Metadata Address: Not Supported 00:20:53.778 SGL Offset: Not Supported 00:20:53.778 Transport SGL Data Block: Not Supported 00:20:53.778 Replay Protected Memory Block: Not Supported 00:20:53.778 00:20:53.778 Firmware Slot Information 00:20:53.778 ========================= 00:20:53.778 Active slot: 1 00:20:53.778 Slot 1 Firmware Revision: 25.01 00:20:53.778 00:20:53.778 00:20:53.778 Commands Supported and Effects 00:20:53.778 ============================== 00:20:53.778 Admin 
Commands 00:20:53.778 -------------- 00:20:53.778 Get Log Page (02h): Supported 00:20:53.778 Identify (06h): Supported 00:20:53.778 Abort (08h): Supported 00:20:53.778 Set Features (09h): Supported 00:20:53.778 Get Features (0Ah): Supported 00:20:53.778 Asynchronous Event Request (0Ch): Supported 00:20:53.778 Keep Alive (18h): Supported 00:20:53.778 I/O Commands 00:20:53.778 ------------ 00:20:53.778 Flush (00h): Supported LBA-Change 00:20:53.778 Write (01h): Supported LBA-Change 00:20:53.778 Read (02h): Supported 00:20:53.778 Compare (05h): Supported 00:20:53.778 Write Zeroes (08h): Supported LBA-Change 00:20:53.778 Dataset Management (09h): Supported LBA-Change 00:20:53.778 Copy (19h): Supported LBA-Change 00:20:53.778 00:20:53.778 Error Log 00:20:53.778 ========= 00:20:53.778 00:20:53.778 Arbitration 00:20:53.778 =========== 00:20:53.778 Arbitration Burst: 1 00:20:53.778 00:20:53.778 Power Management 00:20:53.778 ================ 00:20:53.778 Number of Power States: 1 00:20:53.778 Current Power State: Power State #0 00:20:53.778 Power State #0: 00:20:53.778 Max Power: 0.00 W 00:20:53.778 Non-Operational State: Operational 00:20:53.778 Entry Latency: Not Reported 00:20:53.778 Exit Latency: Not Reported 00:20:53.778 Relative Read Throughput: 0 00:20:53.778 Relative Read Latency: 0 00:20:53.778 Relative Write Throughput: 0 00:20:53.778 Relative Write Latency: 0 00:20:53.778 Idle Power: Not Reported 00:20:53.778 Active Power: Not Reported 00:20:53.778 Non-Operational Permissive Mode: Not Supported 00:20:53.778 00:20:53.778 Health Information 00:20:53.778 ================== 00:20:53.778 Critical Warnings: 00:20:53.778 Available Spare Space: OK 00:20:53.778 Temperature: OK 00:20:53.778 Device Reliability: OK 00:20:53.778 Read Only: No 00:20:53.778 Volatile Memory Backup: OK 00:20:53.778 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:53.778 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:53.778 Available Spare: 0% 00:20:53.778 Available Sp[2024-12-10 00:02:38.079771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:20:53.778 [2024-12-10 00:02:38.079786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:20:53.778 [2024-12-10 00:02:38.079815] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:20:53.778 [2024-12-10 00:02:38.079834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.778 [2024-12-10 00:02:38.079842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.778 [2024-12-10 00:02:38.079849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.778 [2024-12-10 00:02:38.079857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.778 [2024-12-10 00:02:38.083833] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:20:53.778 [2024-12-10 00:02:38.083847] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:20:53.778 [2024-12-10 00:02:38.084761] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:53.778 [2024-12-10 00:02:38.084813] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:20:53.778 [2024-12-10 00:02:38.084821] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:20:53.778 [2024-12-10 00:02:38.085772] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:20:53.778 [2024-12-10 00:02:38.085785] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:20:53.778 [2024-12-10 00:02:38.085840] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:20:53.778 [2024-12-10 00:02:38.086802] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:53.778 are Threshold: 0% 00:20:53.778 Life Percentage Used: 0% 00:20:53.778 Data Units Read: 0 00:20:53.778 Data Units Written: 0 00:20:53.778 Host Read Commands: 0 00:20:53.778 Host Write Commands: 0 00:20:53.778 Controller Busy Time: 0 minutes 00:20:53.778 Power Cycles: 0 00:20:53.778 Power On Hours: 0 hours 00:20:53.778 Unsafe Shutdowns: 0 00:20:53.778 Unrecoverable Media Errors: 0 00:20:53.778 Lifetime Error Log Entries: 0 00:20:53.778 Warning Temperature Time: 0 minutes 00:20:53.778 Critical Temperature Time: 0 minutes 00:20:53.778 00:20:53.779 Number of Queues 00:20:53.779 ================ 00:20:53.779 Number of I/O Submission Queues: 127 00:20:53.779 Number of I/O Completion Queues: 127 00:20:53.779 00:20:53.779 Active Namespaces 00:20:53.779 ================= 00:20:53.779 Namespace ID:1 00:20:53.779 Error Recovery Timeout: Unlimited 00:20:53.779 Command Set Identifier: NVM (00h) 00:20:53.779 Deallocate: Supported 00:20:53.779 Deallocated/Unwritten Error: Not Supported 00:20:53.779 Deallocated Read Value: Unknown 00:20:53.779 Deallocate in Write Zeroes: Not Supported 00:20:53.779 Deallocated Guard Field: 0xFFFF 00:20:53.779 Flush: Supported 00:20:53.779 Reservation: Supported 00:20:53.779 Namespace Sharing Capabilities: Multiple Controllers 00:20:53.779 Size (in LBAs): 131072 (0GiB) 00:20:53.779 Capacity (in LBAs): 131072 (0GiB) 00:20:53.779 Utilization (in LBAs): 131072 (0GiB) 00:20:53.779 NGUID: DA819657786D4C1398BE2451B7D0B8FD 00:20:53.779 UUID: da819657-786d-4c13-98be-2451b7d0b8fd 00:20:53.779 Thin Provisioning: Not Supported 00:20:53.779 Per-NS Atomic Units: Yes 00:20:53.779 Atomic Boundary Size (Normal): 0 00:20:53.779 Atomic Boundary Size (PFail): 0 00:20:53.779 Atomic Boundary Offset: 0 00:20:53.779 Maximum Single Source Range Length: 65535 00:20:53.779 Maximum Copy Length: 65535 00:20:53.779 Maximum Source Range Count: 1 00:20:53.779 NGUID/EUI64 Never Reused: No 00:20:53.779 Namespace Write Protected: No 00:20:53.779 Number of LBA Formats: 1 00:20:53.779 Current LBA Format: LBA Format #00 00:20:53.779 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:53.779 00:20:53.779 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
00:20:54.037 [2024-12-10 00:02:38.319684] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:59.301 Initializing NVMe Controllers 00:20:59.301 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:59.301 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:20:59.301 Initialization complete. Launching workers. 00:20:59.301 ======================================================== 00:20:59.301 Latency(us) 00:20:59.301 Device Information : IOPS MiB/s Average min max 00:20:59.301 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39976.69 156.16 3202.08 934.18 6658.72 00:20:59.301 ======================================================== 00:20:59.301 Total : 39976.69 156.16 3202.08 934.18 6658.72 00:20:59.301 00:20:59.301 [2024-12-10 00:02:43.341024] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:59.301 00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:59.301 [2024-12-10 00:02:43.576121] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:04.563 Initializing NVMe Controllers 00:21:04.563 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:04.563 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:21:04.563 Initialization complete. Launching workers. 
00:21:04.563 ======================================================== 00:21:04.563 Latency(us) 00:21:04.564 Device Information : IOPS MiB/s Average min max 00:21:04.564 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15974.40 62.40 8021.19 4986.13 15964.41 00:21:04.564 ======================================================== 00:21:04.564 Total : 15974.40 62.40 8021.19 4986.13 15964.41 00:21:04.564 00:21:04.564 [2024-12-10 00:02:48.611415] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:04.564 00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:21:04.564 [2024-12-10 00:02:48.846469] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:09.825 [2024-12-10 00:02:53.915131] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:09.825 Initializing NVMe Controllers 00:21:09.825 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:09.825 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:09.825 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:21:09.825 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:21:09.825 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:21:09.825 Initialization complete. Launching workers. 00:21:09.825 Starting thread on core 2 00:21:09.825 Starting thread on core 3 00:21:09.825 Starting thread on core 1 00:21:09.825 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:21:09.825 [2024-12-10 00:02:54.225199] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:13.114 [2024-12-10 00:02:57.284022] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:13.114 Initializing NVMe Controllers 00:21:13.114 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:13.114 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:13.114 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:21:13.114 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:21:13.114 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:21:13.114 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:21:13.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:21:13.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:21:13.114 Initialization complete. Launching workers. 
00:21:13.114 Starting thread on core 1 with urgent priority queue 00:21:13.114 Starting thread on core 2 with urgent priority queue 00:21:13.114 Starting thread on core 3 with urgent priority queue 00:21:13.114 Starting thread on core 0 with urgent priority queue 00:21:13.114 SPDK bdev Controller (SPDK1 ) core 0: 7093.33 IO/s 14.10 secs/100000 ios 00:21:13.114 SPDK bdev Controller (SPDK1 ) core 1: 10727.33 IO/s 9.32 secs/100000 ios 00:21:13.114 SPDK bdev Controller (SPDK1 ) core 2: 8521.67 IO/s 11.73 secs/100000 ios 00:21:13.114 SPDK bdev Controller (SPDK1 ) core 3: 8935.67 IO/s 11.19 secs/100000 ios 00:21:13.114 ======================================================== 00:21:13.114 00:21:13.114 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:21:13.114 [2024-12-10 00:02:57.584232] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:13.372 Initializing NVMe Controllers 00:21:13.372 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:13.372 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:13.372 Namespace ID: 1 size: 0GB 00:21:13.372 Initialization complete. 00:21:13.372 INFO: using host memory buffer for IO 00:21:13.372 Hello world! 00:21:13.372 [2024-12-10 00:02:57.618665] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:13.372 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:21:13.630 [2024-12-10 00:02:57.913172] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:14.563 Initializing NVMe Controllers 00:21:14.563 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:14.563 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:14.563 Initialization complete. Launching workers. 
00:21:14.563 submit (in ns) avg, min, max = 4792.7, 3068.0, 4000379.2 00:21:14.563 complete (in ns) avg, min, max = 21267.8, 1670.4, 4176416.8 00:21:14.563 00:21:14.563 Submit histogram 00:21:14.563 ================ 00:21:14.563 Range in us Cumulative Count 00:21:14.563 3.059 - 3.072: 0.0061% ( 1) 00:21:14.563 3.072 - 3.085: 0.0122% ( 1) 00:21:14.563 3.085 - 3.098: 0.0305% ( 3) 00:21:14.563 3.098 - 3.110: 0.1037% ( 12) 00:21:14.563 3.110 - 3.123: 0.1526% ( 8) 00:21:14.563 3.123 - 3.136: 0.4393% ( 47) 00:21:14.563 3.136 - 3.149: 1.2265% ( 129) 00:21:14.563 3.149 - 3.162: 2.4774% ( 205) 00:21:14.563 3.162 - 3.174: 4.3874% ( 313) 00:21:14.563 3.174 - 3.187: 7.4933% ( 509) 00:21:14.563 3.187 - 3.200: 11.3498% ( 632) 00:21:14.563 3.200 - 3.213: 15.9934% ( 761) 00:21:14.563 3.213 - 3.226: 21.2534% ( 862) 00:21:14.563 3.226 - 3.238: 26.9160% ( 928) 00:21:14.563 3.238 - 3.251: 32.3224% ( 886) 00:21:14.563 3.251 - 3.264: 38.0644% ( 941) 00:21:14.563 3.264 - 3.277: 45.2038% ( 1170) 00:21:14.563 3.277 - 3.302: 55.5406% ( 1694) 00:21:14.563 3.302 - 3.328: 63.4367% ( 1294) 00:21:14.563 3.328 - 3.354: 70.0879% ( 1090) 00:21:14.563 3.354 - 3.379: 76.4767% ( 1047) 00:21:14.563 3.379 - 3.405: 83.5001% ( 1151) 00:21:14.563 3.405 - 3.430: 87.1735% ( 602) 00:21:14.563 3.430 - 3.456: 88.1499% ( 160) 00:21:14.563 3.456 - 3.482: 88.7113% ( 92) 00:21:14.563 3.482 - 3.507: 89.5777% ( 142) 00:21:14.563 3.507 - 3.533: 90.8409% ( 207) 00:21:14.563 3.533 - 3.558: 92.3725% ( 251) 00:21:14.563 3.558 - 3.584: 94.0932% ( 282) 00:21:14.563 3.584 - 3.610: 95.6371% ( 253) 00:21:14.563 3.610 - 3.635: 96.9002% ( 207) 00:21:14.563 3.635 - 3.661: 97.9863% ( 178) 00:21:14.563 3.661 - 3.686: 98.7247% ( 121) 00:21:14.563 3.686 - 3.712: 99.1701% ( 73) 00:21:14.563 3.712 - 3.738: 99.3776% ( 34) 00:21:14.563 3.738 - 3.763: 99.5240% ( 24) 00:21:14.563 3.763 - 3.789: 99.6034% ( 13) 00:21:14.563 3.789 - 3.814: 99.6461% ( 7) 00:21:14.563 3.814 - 3.840: 99.6522% ( 1) 00:21:14.563 3.840 - 3.866: 99.6766% ( 4) 00:21:14.563 3.866 - 3.891: 99.6827% ( 1) 00:21:14.563 3.917 - 3.942: 99.6888% ( 1) 00:21:14.563 4.173 - 4.198: 99.6949% ( 1) 00:21:14.563 5.862 - 5.888: 99.7010% ( 1) 00:21:14.563 5.939 - 5.965: 99.7071% ( 1) 00:21:14.563 5.990 - 6.016: 99.7132% ( 1) 00:21:14.563 6.221 - 6.246: 99.7193% ( 1) 00:21:14.563 6.554 - 6.605: 99.7254% ( 1) 00:21:14.563 6.605 - 6.656: 99.7315% ( 1) 00:21:14.563 6.656 - 6.707: 99.7376% ( 1) 00:21:14.563 6.861 - 6.912: 99.7437% ( 1) 00:21:14.563 7.014 - 7.066: 99.7559% ( 2) 00:21:14.563 7.066 - 7.117: 99.7620% ( 1) 00:21:14.563 7.168 - 7.219: 99.7681% ( 1) 00:21:14.563 7.270 - 7.322: 99.7803% ( 2) 00:21:14.563 7.322 - 7.373: 99.7864% ( 1) 00:21:14.563 7.373 - 7.424: 99.7925% ( 1) 00:21:14.563 7.424 - 7.475: 99.8047% ( 2) 00:21:14.563 7.475 - 7.526: 99.8108% ( 1) 00:21:14.563 7.526 - 7.578: 99.8230% ( 2) 00:21:14.563 7.578 - 7.629: 99.8291% ( 1) 00:21:14.563 7.629 - 7.680: 99.8352% ( 1) 00:21:14.563 7.731 - 7.782: 99.8474% ( 2) 00:21:14.563 7.782 - 7.834: 99.8536% ( 1) 00:21:14.563 7.834 - 7.885: 99.8597% ( 1) 00:21:14.563 7.936 - 7.987: 99.8719% ( 2) 00:21:14.563 7.987 - 8.038: 99.8780% ( 1) 00:21:14.563 8.038 - 8.090: 99.8841% ( 1) 00:21:14.563 8.192 - 8.243: 99.8902% ( 1) 00:21:14.563 8.243 - 8.294: 99.8963% ( 1) 00:21:14.563 8.397 - 8.448: 99.9024% ( 1) 00:21:14.563 8.499 - 8.550: 99.9085% ( 1) 00:21:14.563 8.653 - 8.704: 99.9207% ( 2) 00:21:14.563 8.755 - 8.806: 99.9329% ( 2) 00:21:14.563 9.114 - 9.165: 99.9390% ( 1) 00:21:14.563 9.165 - 9.216: 99.9451% ( 1) 00:21:14.563 9.318 - 9.370: 99.9512% ( 1) 
00:21:14.563 9.523 - 9.574: 99.9573% ( 1) 00:21:14.563 14.746 - 14.848: 99.9634% ( 1) 00:21:14.563 [2024-12-10 00:02:58.932313] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:14.563 3984.589 - 4010.803: 100.0000% ( 6) 00:21:14.563 00:21:14.563 Complete histogram 00:21:14.563 ================== 00:21:14.563 Range in us Cumulative Count 00:21:14.563 1.664 - 1.677: 0.0183% ( 3) 00:21:14.563 1.677 - 1.690: 0.0732% ( 9) 00:21:14.563 1.690 - 1.702: 0.0976% ( 4) 00:21:14.563 1.702 - 1.715: 0.3722% ( 45) 00:21:14.563 1.715 - 1.728: 24.0359% ( 3878) 00:21:14.564 1.728 - 1.741: 74.6339% ( 8292) 00:21:14.564 1.741 - 1.754: 82.7984% ( 1338) 00:21:14.564 1.754 - 1.766: 85.5016% ( 443) 00:21:14.564 1.766 - 1.779: 88.6441% ( 515) 00:21:14.564 1.779 - 1.792: 93.9834% ( 875) 00:21:14.564 1.792 - 1.805: 96.3449% ( 387) 00:21:14.564 1.805 - 1.818: 97.6324% ( 211) 00:21:14.564 1.818 - 1.830: 97.9863% ( 58) 00:21:14.564 1.830 - 1.843: 98.0596% ( 12) 00:21:14.564 1.843 - 1.856: 98.0840% ( 4) 00:21:14.564 1.856 - 1.869: 98.1511% ( 11) 00:21:14.564 1.869 - 1.882: 98.2060% ( 9) 00:21:14.564 1.882 - 1.894: 98.3586% ( 25) 00:21:14.564 1.894 - 1.907: 98.6026% ( 40) 00:21:14.564 1.907 - 1.920: 98.9016% ( 49) 00:21:14.564 1.920 - 1.933: 99.0603% ( 26) 00:21:14.564 1.933 - 1.946: 99.1457% ( 14) 00:21:14.564 1.946 - 1.958: 99.1884% ( 7) 00:21:14.564 1.958 - 1.971: 99.2189% ( 5) 00:21:14.564 1.971 - 1.984: 99.2372% ( 3) 00:21:14.564 1.984 - 1.997: 99.2433% ( 1) 00:21:14.564 2.010 - 2.022: 99.2495% ( 1) 00:21:14.564 2.048 - 2.061: 99.2556% ( 1) 00:21:14.564 2.138 - 2.150: 99.2678% ( 2) 00:21:14.564 2.163 - 2.176: 99.2739% ( 1) 00:21:14.564 2.214 - 2.227: 99.2800% ( 1) 00:21:14.564 2.253 - 2.266: 99.2861% ( 1) 00:21:14.564 2.266 - 2.278: 99.2922% ( 1) 00:21:14.564 2.278 - 2.291: 99.2983% ( 1) 00:21:14.564 2.291 - 2.304: 99.3044% ( 1) 00:21:14.564 2.355 - 2.368: 99.3105% ( 1) 00:21:14.564 4.122 - 4.147: 99.3166% ( 1) 00:21:14.564 4.198 - 4.224: 99.3227% ( 1) 00:21:14.564 4.582 - 4.608: 99.3288% ( 1) 00:21:14.564 4.736 - 4.762: 99.3410% ( 2) 00:21:14.564 4.864 - 4.890: 99.3532% ( 2) 00:21:14.564 4.992 - 5.018: 99.3593% ( 1) 00:21:14.564 5.197 - 5.222: 99.3654% ( 1) 00:21:14.564 5.299 - 5.325: 99.3715% ( 1) 00:21:14.564 5.350 - 5.376: 99.3776% ( 1) 00:21:14.564 5.504 - 5.530: 99.3837% ( 1) 00:21:14.564 5.581 - 5.606: 99.3898% ( 1) 00:21:14.564 5.709 - 5.734: 99.3959% ( 1) 00:21:14.564 5.734 - 5.760: 99.4020% ( 1) 00:21:14.564 5.837 - 5.862: 99.4203% ( 3) 00:21:14.564 5.862 - 5.888: 99.4264% ( 1) 00:21:14.564 6.170 - 6.195: 99.4325% ( 1) 00:21:14.564 6.221 - 6.246: 99.4386% ( 1) 00:21:14.564 6.374 - 6.400: 99.4447% ( 1) 00:21:14.564 6.451 - 6.477: 99.4508% ( 1) 00:21:14.564 6.554 - 6.605: 99.4569% ( 1) 00:21:14.564 6.605 - 6.656: 99.4630% ( 1) 00:21:14.564 6.758 - 6.810: 99.4691% ( 1) 00:21:14.564 6.963 - 7.014: 99.4752% ( 1) 00:21:14.564 7.066 - 7.117: 99.4813% ( 1) 00:21:14.564 7.219 - 7.270: 99.4935% ( 2) 00:21:14.564 7.270 - 7.322: 99.4996% ( 1) 00:21:14.564 8.858 - 8.909: 99.5057% ( 1) 00:21:14.564 10.906 - 10.957: 99.5118% ( 1) 00:21:14.564 3984.589 - 4010.803: 99.9878% ( 78) 00:21:14.564 4010.803 - 4037.018: 99.9939% ( 1) 00:21:14.564 4168.090 - 4194.304: 100.0000% ( 1) 00:21:14.564 00:21:14.564 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:21:14.564 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # 
local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:21:14.564 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:21:14.564 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:21:14.564 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:14.822 [ 00:21:14.822 { 00:21:14.822 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:14.822 "subtype": "Discovery", 00:21:14.822 "listen_addresses": [], 00:21:14.822 "allow_any_host": true, 00:21:14.822 "hosts": [] 00:21:14.822 }, 00:21:14.822 { 00:21:14.822 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:14.822 "subtype": "NVMe", 00:21:14.822 "listen_addresses": [ 00:21:14.822 { 00:21:14.822 "trtype": "VFIOUSER", 00:21:14.822 "adrfam": "IPv4", 00:21:14.822 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:14.822 "trsvcid": "0" 00:21:14.822 } 00:21:14.822 ], 00:21:14.822 "allow_any_host": true, 00:21:14.822 "hosts": [], 00:21:14.822 "serial_number": "SPDK1", 00:21:14.822 "model_number": "SPDK bdev Controller", 00:21:14.822 "max_namespaces": 32, 00:21:14.822 "min_cntlid": 1, 00:21:14.822 "max_cntlid": 65519, 00:21:14.822 "namespaces": [ 00:21:14.822 { 00:21:14.822 "nsid": 1, 00:21:14.822 "bdev_name": "Malloc1", 00:21:14.822 "name": "Malloc1", 00:21:14.822 "nguid": "DA819657786D4C1398BE2451B7D0B8FD", 00:21:14.822 "uuid": "da819657-786d-4c13-98be-2451b7d0b8fd" 00:21:14.822 } 00:21:14.822 ] 00:21:14.822 }, 00:21:14.822 { 00:21:14.822 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:14.822 "subtype": "NVMe", 00:21:14.822 "listen_addresses": [ 00:21:14.822 { 00:21:14.822 "trtype": "VFIOUSER", 00:21:14.822 "adrfam": "IPv4", 00:21:14.822 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:14.822 "trsvcid": "0" 00:21:14.822 } 00:21:14.822 ], 00:21:14.822 "allow_any_host": true, 00:21:14.822 "hosts": [], 00:21:14.822 "serial_number": "SPDK2", 00:21:14.822 "model_number": "SPDK bdev Controller", 00:21:14.822 "max_namespaces": 32, 00:21:14.822 "min_cntlid": 1, 00:21:14.822 "max_cntlid": 65519, 00:21:14.822 "namespaces": [ 00:21:14.822 { 00:21:14.822 "nsid": 1, 00:21:14.822 "bdev_name": "Malloc2", 00:21:14.822 "name": "Malloc2", 00:21:14.822 "nguid": "082AE0DA39F9427BA9581B46F1AD48A9", 00:21:14.822 "uuid": "082ae0da-39f9-427b-a958-1b46f1ad48a9" 00:21:14.822 } 00:21:14.822 ] 00:21:14.822 } 00:21:14.822 ] 00:21:14.822 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:14.822 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=387154 00:21:14.822 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:21:14.822 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:21:14.822 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:21:14.822 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:14.822 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:14.822 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:21:14.822 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:14.822 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:14.822 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:14.822 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:21:14.822 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:15.080 [2024-12-10 00:02:59.359233] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:15.080 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:15.080 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:15.080 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:21:15.080 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:21:15.080 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:21:15.338 Malloc3 00:21:15.338 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:21:15.338 [2024-12-10 00:02:59.794331] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:15.596 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:15.596 Asynchronous Event Request test 00:21:15.596 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:15.596 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:15.596 Registering asynchronous event callbacks... 00:21:15.596 Starting namespace attribute notice tests for all controllers... 00:21:15.596 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:15.596 aer_cb - Changed Namespace 00:21:15.596 Cleaning up... 
00:21:15.596 [ 00:21:15.596 { 00:21:15.596 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:15.596 "subtype": "Discovery", 00:21:15.596 "listen_addresses": [], 00:21:15.596 "allow_any_host": true, 00:21:15.596 "hosts": [] 00:21:15.596 }, 00:21:15.596 { 00:21:15.596 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:15.596 "subtype": "NVMe", 00:21:15.596 "listen_addresses": [ 00:21:15.596 { 00:21:15.596 "trtype": "VFIOUSER", 00:21:15.596 "adrfam": "IPv4", 00:21:15.596 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:15.596 "trsvcid": "0" 00:21:15.596 } 00:21:15.596 ], 00:21:15.596 "allow_any_host": true, 00:21:15.596 "hosts": [], 00:21:15.596 "serial_number": "SPDK1", 00:21:15.596 "model_number": "SPDK bdev Controller", 00:21:15.596 "max_namespaces": 32, 00:21:15.596 "min_cntlid": 1, 00:21:15.596 "max_cntlid": 65519, 00:21:15.596 "namespaces": [ 00:21:15.596 { 00:21:15.596 "nsid": 1, 00:21:15.596 "bdev_name": "Malloc1", 00:21:15.596 "name": "Malloc1", 00:21:15.596 "nguid": "DA819657786D4C1398BE2451B7D0B8FD", 00:21:15.596 "uuid": "da819657-786d-4c13-98be-2451b7d0b8fd" 00:21:15.596 }, 00:21:15.596 { 00:21:15.596 "nsid": 2, 00:21:15.596 "bdev_name": "Malloc3", 00:21:15.596 "name": "Malloc3", 00:21:15.596 "nguid": "2428B10990534467BBCFA169BE404823", 00:21:15.596 "uuid": "2428b109-9053-4467-bbcf-a169be404823" 00:21:15.596 } 00:21:15.596 ] 00:21:15.596 }, 00:21:15.596 { 00:21:15.596 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:15.596 "subtype": "NVMe", 00:21:15.596 "listen_addresses": [ 00:21:15.596 { 00:21:15.596 "trtype": "VFIOUSER", 00:21:15.596 "adrfam": "IPv4", 00:21:15.596 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:15.596 "trsvcid": "0" 00:21:15.596 } 00:21:15.596 ], 00:21:15.596 "allow_any_host": true, 00:21:15.596 "hosts": [], 00:21:15.596 "serial_number": "SPDK2", 00:21:15.596 "model_number": "SPDK bdev Controller", 00:21:15.596 "max_namespaces": 32, 00:21:15.596 "min_cntlid": 1, 00:21:15.596 "max_cntlid": 65519, 00:21:15.596 "namespaces": [ 00:21:15.596 { 00:21:15.596 "nsid": 1, 00:21:15.596 "bdev_name": "Malloc2", 00:21:15.596 "name": "Malloc2", 00:21:15.596 "nguid": "082AE0DA39F9427BA9581B46F1AD48A9", 00:21:15.596 "uuid": "082ae0da-39f9-427b-a958-1b46f1ad48a9" 00:21:15.596 } 00:21:15.596 ] 00:21:15.596 } 00:21:15.596 ] 00:21:15.596 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 387154 00:21:15.596 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:15.596 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:21:15.596 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:21:15.596 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:21:15.596 [2024-12-10 00:03:00.059507] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:21:15.596 [2024-12-10 00:03:00.059546] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387307 ] 00:21:15.856 [2024-12-10 00:03:00.102412] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:21:15.856 [2024-12-10 00:03:00.107691] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:21:15.856 [2024-12-10 00:03:00.107716] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f223c3f2000 00:21:15.856 [2024-12-10 00:03:00.108689] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:15.856 [2024-12-10 00:03:00.109689] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:15.856 [2024-12-10 00:03:00.110702] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:15.856 [2024-12-10 00:03:00.111707] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:15.856 [2024-12-10 00:03:00.112709] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:15.856 [2024-12-10 00:03:00.113715] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:15.856 [2024-12-10 00:03:00.114726] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:15.856 [2024-12-10 00:03:00.115732] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:15.856 [2024-12-10 00:03:00.116747] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:21:15.856 [2024-12-10 00:03:00.116760] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f223c3e7000 00:21:15.856 [2024-12-10 00:03:00.117655] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:21:15.856 [2024-12-10 00:03:00.126873] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:21:15.856 [2024-12-10 00:03:00.126903] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:21:15.856 [2024-12-10 00:03:00.132128] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:21:15.856 [2024-12-10 00:03:00.132175] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:21:15.856 [2024-12-10 00:03:00.132263] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:21:15.856 
[2024-12-10 00:03:00.132281] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:21:15.856 [2024-12-10 00:03:00.132287] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:21:15.856 [2024-12-10 00:03:00.133136] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:21:15.856 [2024-12-10 00:03:00.133151] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:21:15.856 [2024-12-10 00:03:00.133160] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:21:15.856 [2024-12-10 00:03:00.134141] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:21:15.856 [2024-12-10 00:03:00.134153] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:21:15.857 [2024-12-10 00:03:00.134162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:21:15.857 [2024-12-10 00:03:00.135157] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:21:15.857 [2024-12-10 00:03:00.135169] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:15.857 [2024-12-10 00:03:00.136167] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:21:15.857 [2024-12-10 00:03:00.136178] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:21:15.857 [2024-12-10 00:03:00.136187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:21:15.857 [2024-12-10 00:03:00.136196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:15.857 [2024-12-10 00:03:00.136306] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:21:15.857 [2024-12-10 00:03:00.136312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:15.857 [2024-12-10 00:03:00.136319] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:21:15.857 [2024-12-10 00:03:00.137178] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:21:15.857 [2024-12-10 00:03:00.138189] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:21:15.857 [2024-12-10 00:03:00.139195] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:21:15.857 [2024-12-10 00:03:00.140201] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:15.857 [2024-12-10 00:03:00.140246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:15.857 [2024-12-10 00:03:00.141213] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:21:15.857 [2024-12-10 00:03:00.141224] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:15.857 [2024-12-10 00:03:00.141230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.141249] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:21:15.857 [2024-12-10 00:03:00.141260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.141281] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:15.857 [2024-12-10 00:03:00.141288] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:15.857 [2024-12-10 00:03:00.141292] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:15.857 [2024-12-10 00:03:00.141306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:15.857 [2024-12-10 00:03:00.149833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:21:15.857 [2024-12-10 00:03:00.149853] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:21:15.857 [2024-12-10 00:03:00.149859] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:21:15.857 [2024-12-10 00:03:00.149865] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:21:15.857 [2024-12-10 00:03:00.149871] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:21:15.857 [2024-12-10 00:03:00.149878] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:21:15.857 [2024-12-10 00:03:00.149886] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:21:15.857 [2024-12-10 00:03:00.149892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.149902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:21:15.857 [2024-12-10 
00:03:00.149914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:21:15.857 [2024-12-10 00:03:00.157832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:21:15.857 [2024-12-10 00:03:00.157847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:21:15.857 [2024-12-10 00:03:00.157857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:21:15.857 [2024-12-10 00:03:00.157866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:21:15.857 [2024-12-10 00:03:00.157875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:21:15.857 [2024-12-10 00:03:00.157881] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.157895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.157905] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:21:15.857 [2024-12-10 00:03:00.165832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:21:15.857 [2024-12-10 00:03:00.165845] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:21:15.857 [2024-12-10 00:03:00.165852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.165861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.165868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.165879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:15.857 [2024-12-10 00:03:00.173831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:21:15.857 [2024-12-10 00:03:00.173884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.173894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.173903] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:21:15.857 [2024-12-10 00:03:00.173910] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:21:15.857 [2024-12-10 00:03:00.173915] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:15.857 [2024-12-10 00:03:00.173922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:21:15.857 [2024-12-10 00:03:00.181829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:21:15.857 [2024-12-10 00:03:00.181843] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:21:15.857 [2024-12-10 00:03:00.181860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.181869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.181877] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:15.857 [2024-12-10 00:03:00.181883] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:15.857 [2024-12-10 00:03:00.181888] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:15.857 [2024-12-10 00:03:00.181894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:15.857 [2024-12-10 00:03:00.189830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:21:15.857 [2024-12-10 00:03:00.189849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.189859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.189867] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:15.857 [2024-12-10 00:03:00.189873] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:15.857 [2024-12-10 00:03:00.189877] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:15.857 [2024-12-10 00:03:00.189884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:15.857 [2024-12-10 00:03:00.197830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:21:15.857 [2024-12-10 00:03:00.197843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.197851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.197861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.197870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.197877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.197883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:21:15.857 [2024-12-10 00:03:00.197889] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:21:15.858 [2024-12-10 00:03:00.197895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:21:15.858 [2024-12-10 00:03:00.197901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:21:15.858 [2024-12-10 00:03:00.197921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:21:15.858 [2024-12-10 00:03:00.205830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:21:15.858 [2024-12-10 00:03:00.205845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:21:15.858 [2024-12-10 00:03:00.213829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:21:15.858 [2024-12-10 00:03:00.213846] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:21:15.858 [2024-12-10 00:03:00.221830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:21:15.858 [2024-12-10 00:03:00.221846] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:15.858 [2024-12-10 00:03:00.229832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:21:15.858 [2024-12-10 00:03:00.229852] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:21:15.858 [2024-12-10 00:03:00.229858] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:21:15.858 [2024-12-10 00:03:00.229863] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:21:15.858 [2024-12-10 00:03:00.229867] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:21:15.858 [2024-12-10 00:03:00.229872] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:21:15.858 [2024-12-10 00:03:00.229879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:21:15.858 [2024-12-10 00:03:00.229887] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:21:15.858 
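The nvme_pcie_prp_list_append lines around here show how each admin-command buffer is described: the page-aligned 4096-byte identify buffers need a single PRP entry, while the 8192-byte GET LOG PAGE buffer needs two (PRP1 plus PRP2). A simplified sketch of the page-count rule follows, assuming 4 KiB controller pages and ignoring the chained PRP list used for larger transfers.

    # Sketch: number of 4 KiB pages (PRP entries) needed to cover a buffer.
    # Simplified model of the prp_list_append debug output above.
    PAGE = 4096

    def prp_entries(virt_addr: int, length: int) -> int:
        offset = virt_addr % PAGE
        return (offset + length + PAGE - 1) // PAGE

    print(prp_entries(0x2000002FB000, 4096))   # 1 (identify buffer)
    print(prp_entries(0x2000002F6000, 8192))   # 2 (get-log-page buffer)
    print(prp_entries(0x2000002FC000, 512))    # 1 (small log buffer)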
[2024-12-10 00:03:00.229893] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:21:15.858 [2024-12-10 00:03:00.229897] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:15.858 [2024-12-10 00:03:00.229904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:21:15.858 [2024-12-10 00:03:00.229912] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:21:15.858 [2024-12-10 00:03:00.229918] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:15.858 [2024-12-10 00:03:00.229922] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:15.858 [2024-12-10 00:03:00.229928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:15.858 [2024-12-10 00:03:00.229937] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:21:15.858 [2024-12-10 00:03:00.229943] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:21:15.858 [2024-12-10 00:03:00.229947] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:15.858 [2024-12-10 00:03:00.229954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:21:15.858 [2024-12-10 00:03:00.237831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:21:15.858 [2024-12-10 00:03:00.237849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:21:15.858 [2024-12-10 00:03:00.237862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:21:15.858 [2024-12-10 00:03:00.237872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:21:15.858 ===================================================== 00:21:15.858 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:15.858 ===================================================== 00:21:15.858 Controller Capabilities/Features 00:21:15.858 ================================ 00:21:15.858 Vendor ID: 4e58 00:21:15.858 Subsystem Vendor ID: 4e58 00:21:15.858 Serial Number: SPDK2 00:21:15.858 Model Number: SPDK bdev Controller 00:21:15.858 Firmware Version: 25.01 00:21:15.858 Recommended Arb Burst: 6 00:21:15.858 IEEE OUI Identifier: 8d 6b 50 00:21:15.858 Multi-path I/O 00:21:15.858 May have multiple subsystem ports: Yes 00:21:15.858 May have multiple controllers: Yes 00:21:15.858 Associated with SR-IOV VF: No 00:21:15.858 Max Data Transfer Size: 131072 00:21:15.858 Max Number of Namespaces: 32 00:21:15.858 Max Number of I/O Queues: 127 00:21:15.858 NVMe Specification Version (VS): 1.3 00:21:15.858 NVMe Specification Version (Identify): 1.3 00:21:15.858 Maximum Queue Entries: 256 00:21:15.858 Contiguous Queues Required: Yes 00:21:15.858 Arbitration Mechanisms Supported 00:21:15.858 Weighted Round Robin: Not Supported 00:21:15.858 Vendor Specific: Not 
Supported 00:21:15.858 Reset Timeout: 15000 ms 00:21:15.858 Doorbell Stride: 4 bytes 00:21:15.858 NVM Subsystem Reset: Not Supported 00:21:15.858 Command Sets Supported 00:21:15.858 NVM Command Set: Supported 00:21:15.858 Boot Partition: Not Supported 00:21:15.858 Memory Page Size Minimum: 4096 bytes 00:21:15.858 Memory Page Size Maximum: 4096 bytes 00:21:15.858 Persistent Memory Region: Not Supported 00:21:15.858 Optional Asynchronous Events Supported 00:21:15.858 Namespace Attribute Notices: Supported 00:21:15.858 Firmware Activation Notices: Not Supported 00:21:15.858 ANA Change Notices: Not Supported 00:21:15.858 PLE Aggregate Log Change Notices: Not Supported 00:21:15.858 LBA Status Info Alert Notices: Not Supported 00:21:15.858 EGE Aggregate Log Change Notices: Not Supported 00:21:15.858 Normal NVM Subsystem Shutdown event: Not Supported 00:21:15.858 Zone Descriptor Change Notices: Not Supported 00:21:15.858 Discovery Log Change Notices: Not Supported 00:21:15.858 Controller Attributes 00:21:15.858 128-bit Host Identifier: Supported 00:21:15.858 Non-Operational Permissive Mode: Not Supported 00:21:15.858 NVM Sets: Not Supported 00:21:15.858 Read Recovery Levels: Not Supported 00:21:15.858 Endurance Groups: Not Supported 00:21:15.858 Predictable Latency Mode: Not Supported 00:21:15.858 Traffic Based Keep ALive: Not Supported 00:21:15.858 Namespace Granularity: Not Supported 00:21:15.858 SQ Associations: Not Supported 00:21:15.858 UUID List: Not Supported 00:21:15.858 Multi-Domain Subsystem: Not Supported 00:21:15.858 Fixed Capacity Management: Not Supported 00:21:15.858 Variable Capacity Management: Not Supported 00:21:15.858 Delete Endurance Group: Not Supported 00:21:15.858 Delete NVM Set: Not Supported 00:21:15.858 Extended LBA Formats Supported: Not Supported 00:21:15.858 Flexible Data Placement Supported: Not Supported 00:21:15.858 00:21:15.858 Controller Memory Buffer Support 00:21:15.858 ================================ 00:21:15.858 Supported: No 00:21:15.858 00:21:15.858 Persistent Memory Region Support 00:21:15.858 ================================ 00:21:15.858 Supported: No 00:21:15.858 00:21:15.858 Admin Command Set Attributes 00:21:15.858 ============================ 00:21:15.858 Security Send/Receive: Not Supported 00:21:15.858 Format NVM: Not Supported 00:21:15.858 Firmware Activate/Download: Not Supported 00:21:15.858 Namespace Management: Not Supported 00:21:15.858 Device Self-Test: Not Supported 00:21:15.858 Directives: Not Supported 00:21:15.858 NVMe-MI: Not Supported 00:21:15.858 Virtualization Management: Not Supported 00:21:15.858 Doorbell Buffer Config: Not Supported 00:21:15.858 Get LBA Status Capability: Not Supported 00:21:15.858 Command & Feature Lockdown Capability: Not Supported 00:21:15.858 Abort Command Limit: 4 00:21:15.858 Async Event Request Limit: 4 00:21:15.858 Number of Firmware Slots: N/A 00:21:15.858 Firmware Slot 1 Read-Only: N/A 00:21:15.858 Firmware Activation Without Reset: N/A 00:21:15.858 Multiple Update Detection Support: N/A 00:21:15.858 Firmware Update Granularity: No Information Provided 00:21:15.858 Per-Namespace SMART Log: No 00:21:15.858 Asymmetric Namespace Access Log Page: Not Supported 00:21:15.858 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:21:15.858 Command Effects Log Page: Supported 00:21:15.858 Get Log Page Extended Data: Supported 00:21:15.858 Telemetry Log Pages: Not Supported 00:21:15.858 Persistent Event Log Pages: Not Supported 00:21:15.858 Supported Log Pages Log Page: May Support 00:21:15.858 Commands Supported & 
Effects Log Page: Not Supported 00:21:15.858 Feature Identifiers & Effects Log Page:May Support 00:21:15.858 NVMe-MI Commands & Effects Log Page: May Support 00:21:15.858 Data Area 4 for Telemetry Log: Not Supported 00:21:15.858 Error Log Page Entries Supported: 128 00:21:15.858 Keep Alive: Supported 00:21:15.858 Keep Alive Granularity: 10000 ms 00:21:15.858 00:21:15.858 NVM Command Set Attributes 00:21:15.858 ========================== 00:21:15.858 Submission Queue Entry Size 00:21:15.858 Max: 64 00:21:15.858 Min: 64 00:21:15.858 Completion Queue Entry Size 00:21:15.858 Max: 16 00:21:15.858 Min: 16 00:21:15.858 Number of Namespaces: 32 00:21:15.858 Compare Command: Supported 00:21:15.859 Write Uncorrectable Command: Not Supported 00:21:15.859 Dataset Management Command: Supported 00:21:15.859 Write Zeroes Command: Supported 00:21:15.859 Set Features Save Field: Not Supported 00:21:15.859 Reservations: Not Supported 00:21:15.859 Timestamp: Not Supported 00:21:15.859 Copy: Supported 00:21:15.859 Volatile Write Cache: Present 00:21:15.859 Atomic Write Unit (Normal): 1 00:21:15.859 Atomic Write Unit (PFail): 1 00:21:15.859 Atomic Compare & Write Unit: 1 00:21:15.859 Fused Compare & Write: Supported 00:21:15.859 Scatter-Gather List 00:21:15.859 SGL Command Set: Supported (Dword aligned) 00:21:15.859 SGL Keyed: Not Supported 00:21:15.859 SGL Bit Bucket Descriptor: Not Supported 00:21:15.859 SGL Metadata Pointer: Not Supported 00:21:15.859 Oversized SGL: Not Supported 00:21:15.859 SGL Metadata Address: Not Supported 00:21:15.859 SGL Offset: Not Supported 00:21:15.859 Transport SGL Data Block: Not Supported 00:21:15.859 Replay Protected Memory Block: Not Supported 00:21:15.859 00:21:15.859 Firmware Slot Information 00:21:15.859 ========================= 00:21:15.859 Active slot: 1 00:21:15.859 Slot 1 Firmware Revision: 25.01 00:21:15.859 00:21:15.859 00:21:15.859 Commands Supported and Effects 00:21:15.859 ============================== 00:21:15.859 Admin Commands 00:21:15.859 -------------- 00:21:15.859 Get Log Page (02h): Supported 00:21:15.859 Identify (06h): Supported 00:21:15.859 Abort (08h): Supported 00:21:15.859 Set Features (09h): Supported 00:21:15.859 Get Features (0Ah): Supported 00:21:15.859 Asynchronous Event Request (0Ch): Supported 00:21:15.859 Keep Alive (18h): Supported 00:21:15.859 I/O Commands 00:21:15.859 ------------ 00:21:15.859 Flush (00h): Supported LBA-Change 00:21:15.859 Write (01h): Supported LBA-Change 00:21:15.859 Read (02h): Supported 00:21:15.859 Compare (05h): Supported 00:21:15.859 Write Zeroes (08h): Supported LBA-Change 00:21:15.859 Dataset Management (09h): Supported LBA-Change 00:21:15.859 Copy (19h): Supported LBA-Change 00:21:15.859 00:21:15.859 Error Log 00:21:15.859 ========= 00:21:15.859 00:21:15.859 Arbitration 00:21:15.859 =========== 00:21:15.859 Arbitration Burst: 1 00:21:15.859 00:21:15.859 Power Management 00:21:15.859 ================ 00:21:15.859 Number of Power States: 1 00:21:15.859 Current Power State: Power State #0 00:21:15.859 Power State #0: 00:21:15.859 Max Power: 0.00 W 00:21:15.859 Non-Operational State: Operational 00:21:15.859 Entry Latency: Not Reported 00:21:15.859 Exit Latency: Not Reported 00:21:15.859 Relative Read Throughput: 0 00:21:15.859 Relative Read Latency: 0 00:21:15.859 Relative Write Throughput: 0 00:21:15.859 Relative Write Latency: 0 00:21:15.859 Idle Power: Not Reported 00:21:15.859 Active Power: Not Reported 00:21:15.859 Non-Operational Permissive Mode: Not Supported 00:21:15.859 00:21:15.859 Health Information 
00:21:15.859 ================== 00:21:15.859 Critical Warnings: 00:21:15.859 Available Spare Space: OK 00:21:15.859 Temperature: OK 00:21:15.859 Device Reliability: OK 00:21:15.859 Read Only: No 00:21:15.859 Volatile Memory Backup: OK 00:21:15.859 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:15.859 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:15.859 Available Spare: 0% 00:21:15.859 Available Sp[2024-12-10 00:03:00.237962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:21:15.859 [2024-12-10 00:03:00.245830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:21:15.859 [2024-12-10 00:03:00.245867] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:21:15.859 [2024-12-10 00:03:00.245878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.859 [2024-12-10 00:03:00.245886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.859 [2024-12-10 00:03:00.245894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.859 [2024-12-10 00:03:00.245902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.859 [2024-12-10 00:03:00.245955] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:21:15.859 [2024-12-10 00:03:00.245969] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:21:15.859 [2024-12-10 00:03:00.246961] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:15.859 [2024-12-10 00:03:00.247008] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:21:15.859 [2024-12-10 00:03:00.247017] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:21:15.859 [2024-12-10 00:03:00.247963] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:21:15.859 [2024-12-10 00:03:00.247977] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:21:15.859 [2024-12-10 00:03:00.248033] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:21:15.859 [2024-12-10 00:03:00.248986] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:21:15.859 are Threshold: 0% 00:21:15.859 Life Percentage Used: 0% 00:21:15.859 Data Units Read: 0 00:21:15.859 Data Units Written: 0 00:21:15.859 Host Read Commands: 0 00:21:15.859 Host Write Commands: 0 00:21:15.859 Controller Busy Time: 0 minutes 00:21:15.859 Power Cycles: 0 00:21:15.859 Power On Hours: 0 hours 00:21:15.859 Unsafe Shutdowns: 0 00:21:15.859 Unrecoverable Media Errors: 0 00:21:15.859 Lifetime Error Log Entries: 0 00:21:15.859 Warning Temperature 
Time: 0 minutes 00:21:15.859 Critical Temperature Time: 0 minutes 00:21:15.859 00:21:15.859 Number of Queues 00:21:15.859 ================ 00:21:15.859 Number of I/O Submission Queues: 127 00:21:15.859 Number of I/O Completion Queues: 127 00:21:15.859 00:21:15.859 Active Namespaces 00:21:15.859 ================= 00:21:15.859 Namespace ID:1 00:21:15.859 Error Recovery Timeout: Unlimited 00:21:15.859 Command Set Identifier: NVM (00h) 00:21:15.859 Deallocate: Supported 00:21:15.859 Deallocated/Unwritten Error: Not Supported 00:21:15.859 Deallocated Read Value: Unknown 00:21:15.859 Deallocate in Write Zeroes: Not Supported 00:21:15.859 Deallocated Guard Field: 0xFFFF 00:21:15.859 Flush: Supported 00:21:15.859 Reservation: Supported 00:21:15.859 Namespace Sharing Capabilities: Multiple Controllers 00:21:15.859 Size (in LBAs): 131072 (0GiB) 00:21:15.859 Capacity (in LBAs): 131072 (0GiB) 00:21:15.859 Utilization (in LBAs): 131072 (0GiB) 00:21:15.859 NGUID: 082AE0DA39F9427BA9581B46F1AD48A9 00:21:15.859 UUID: 082ae0da-39f9-427b-a958-1b46f1ad48a9 00:21:15.859 Thin Provisioning: Not Supported 00:21:15.859 Per-NS Atomic Units: Yes 00:21:15.859 Atomic Boundary Size (Normal): 0 00:21:15.859 Atomic Boundary Size (PFail): 0 00:21:15.859 Atomic Boundary Offset: 0 00:21:15.859 Maximum Single Source Range Length: 65535 00:21:15.859 Maximum Copy Length: 65535 00:21:15.859 Maximum Source Range Count: 1 00:21:15.859 NGUID/EUI64 Never Reused: No 00:21:15.859 Namespace Write Protected: No 00:21:15.859 Number of LBA Formats: 1 00:21:15.859 Current LBA Format: LBA Format #00 00:21:15.859 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:15.859 00:21:15.859 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:21:16.116 [2024-12-10 00:03:00.483992] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:21.381 Initializing NVMe Controllers 00:21:21.381 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:21.381 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:21:21.381 Initialization complete. Launching workers. 
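The Active Namespaces block above reports 131072 LBAs with a 512-byte data size, which is why Size, Capacity and Utilization all print as 0 GiB: the backing malloc bdev is only 64 MiB. Quick arithmetic as a sketch:

    # Sketch: namespace capacity from the identify data above.
    lbas = 131072          # Size (in LBAs)
    lba_size = 512         # LBA Format #00: Data Size
    total = lbas * lba_size
    print(total, "bytes")          # 67108864
    print(total / 2**20, "MiB")    # 64.0
    print(total // 2**30, "GiB")   # 0, as printed in the log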
00:21:21.381 ======================================================== 00:21:21.381 Latency(us) 00:21:21.381 Device Information : IOPS MiB/s Average min max 00:21:21.381 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39928.02 155.97 3205.60 953.07 6705.08 00:21:21.381 ======================================================== 00:21:21.381 Total : 39928.02 155.97 3205.60 953.07 6705.08 00:21:21.381 00:21:21.381 [2024-12-10 00:03:05.593076] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:21.381 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:21:21.381 [2024-12-10 00:03:05.823786] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:26.673 Initializing NVMe Controllers 00:21:26.673 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:26.673 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:21:26.673 Initialization complete. Launching workers. 00:21:26.673 ======================================================== 00:21:26.673 Latency(us) 00:21:26.673 Device Information : IOPS MiB/s Average min max 00:21:26.673 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39905.95 155.88 3207.38 953.61 10611.90 00:21:26.673 ======================================================== 00:21:26.673 Total : 39905.95 155.88 3207.38 953.61 10611.90 00:21:26.673 00:21:26.673 [2024-12-10 00:03:10.845972] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:26.673 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:21:26.673 [2024-12-10 00:03:11.077117] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:31.958 [2024-12-10 00:03:16.217925] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:31.958 Initializing NVMe Controllers 00:21:31.958 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:31.958 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:31.958 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:21:31.958 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:21:31.958 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:21:31.958 Initialization complete. Launching workers. 
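A quick cross-check of the two spdk_nvme_perf tables above: the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size (-o 4096), converted to MiB.

    # Sketch: cross-check MiB/s against IOPS for the 4 KiB perf runs above.
    io_size = 4096
    for label, iops in [("read", 39928.02), ("write", 39905.95)]:
        mibs = iops * io_size / 2**20
        print(f"{label}: {mibs:.2f} MiB/s")   # ~155.97 and ~155.88, as reported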
00:21:31.958 Starting thread on core 2 00:21:31.958 Starting thread on core 3 00:21:31.958 Starting thread on core 1 00:21:31.958 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:21:32.217 [2024-12-10 00:03:16.523858] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:35.529 [2024-12-10 00:03:19.578170] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:35.529 Initializing NVMe Controllers 00:21:35.529 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:35.529 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:35.529 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:21:35.529 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:21:35.529 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:21:35.530 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:21:35.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:21:35.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:21:35.530 Initialization complete. Launching workers. 00:21:35.530 Starting thread on core 1 with urgent priority queue 00:21:35.530 Starting thread on core 2 with urgent priority queue 00:21:35.530 Starting thread on core 3 with urgent priority queue 00:21:35.530 Starting thread on core 0 with urgent priority queue 00:21:35.530 SPDK bdev Controller (SPDK2 ) core 0: 8941.67 IO/s 11.18 secs/100000 ios 00:21:35.530 SPDK bdev Controller (SPDK2 ) core 1: 8487.33 IO/s 11.78 secs/100000 ios 00:21:35.530 SPDK bdev Controller (SPDK2 ) core 2: 7862.00 IO/s 12.72 secs/100000 ios 00:21:35.530 SPDK bdev Controller (SPDK2 ) core 3: 8605.00 IO/s 11.62 secs/100000 ios 00:21:35.530 ======================================================== 00:21:35.530 00:21:35.530 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:21:35.530 [2024-12-10 00:03:19.882851] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:35.530 Initializing NVMe Controllers 00:21:35.530 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:35.530 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:35.530 Namespace ID: 1 size: 0GB 00:21:35.530 Initialization complete. 00:21:35.530 INFO: using host memory buffer for IO 00:21:35.530 Hello world! 
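In the arbitration summary below, each core line reports IOPS alongside "secs/100000 ios"; the second figure is just 100000 divided by that core's IOPS. A small sketch reproducing the column from the logged per-core rates:

    # Sketch: reproduce the "secs/100000 ios" column from the per-core IOPS.
    per_core = {0: 8941.67, 1: 8487.33, 2: 7862.00, 3: 8605.00}
    for core, iops in per_core.items():
        print(f"core {core}: {100000 / iops:.2f} secs/100000 ios")
    # 11.18, 11.78, 12.72, 11.62 -- matching the arbitration output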
00:21:35.530 [2024-12-10 00:03:19.892935] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:35.530 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:21:35.792 [2024-12-10 00:03:20.197685] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:37.175 Initializing NVMe Controllers 00:21:37.176 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:37.176 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:37.176 Initialization complete. Launching workers. 00:21:37.176 submit (in ns) avg, min, max = 6705.0, 3088.8, 4011980.0 00:21:37.176 complete (in ns) avg, min, max = 22909.4, 1710.4, 5991529.6 00:21:37.176 00:21:37.176 Submit histogram 00:21:37.176 ================ 00:21:37.176 Range in us Cumulative Count 00:21:37.176 3.085 - 3.098: 0.0120% ( 2) 00:21:37.176 3.098 - 3.110: 0.1024% ( 15) 00:21:37.176 3.110 - 3.123: 0.2470% ( 24) 00:21:37.176 3.123 - 3.136: 0.5723% ( 54) 00:21:37.176 3.136 - 3.149: 1.0181% ( 74) 00:21:37.176 3.149 - 3.162: 1.7230% ( 117) 00:21:37.176 3.162 - 3.174: 3.2532% ( 254) 00:21:37.176 3.174 - 3.187: 5.7895% ( 421) 00:21:37.176 3.187 - 3.200: 9.5427% ( 623) 00:21:37.176 3.200 - 3.213: 13.6876% ( 688) 00:21:37.176 3.213 - 3.226: 19.4409% ( 955) 00:21:37.176 3.226 - 3.238: 25.5076% ( 1007) 00:21:37.176 3.238 - 3.251: 31.7067% ( 1029) 00:21:37.176 3.251 - 3.264: 37.8396% ( 1018) 00:21:37.176 3.264 - 3.277: 43.8581% ( 999) 00:21:37.176 3.277 - 3.302: 54.5274% ( 1771) 00:21:37.176 3.302 - 3.328: 60.4314% ( 980) 00:21:37.176 3.328 - 3.354: 66.4438% ( 998) 00:21:37.176 3.354 - 3.379: 71.3055% ( 807) 00:21:37.176 3.379 - 3.405: 77.2095% ( 980) 00:21:37.176 3.405 - 3.430: 85.0051% ( 1294) 00:21:37.176 3.430 - 3.456: 87.6318% ( 436) 00:21:37.176 3.456 - 3.482: 88.4571% ( 137) 00:21:37.176 3.482 - 3.507: 89.0475% ( 98) 00:21:37.176 3.507 - 3.533: 90.2103% ( 193) 00:21:37.176 3.533 - 3.558: 92.0477% ( 305) 00:21:37.176 3.558 - 3.584: 93.8189% ( 294) 00:21:37.176 3.584 - 3.610: 95.2166% ( 232) 00:21:37.176 3.610 - 3.635: 96.2769% ( 176) 00:21:37.176 3.635 - 3.661: 97.2890% ( 168) 00:21:37.176 3.661 - 3.686: 98.3854% ( 182) 00:21:37.176 3.686 - 3.712: 98.9517% ( 94) 00:21:37.176 3.712 - 3.738: 99.2951% ( 57) 00:21:37.176 3.738 - 3.763: 99.5301% ( 39) 00:21:37.176 3.763 - 3.789: 99.6385% ( 18) 00:21:37.176 3.789 - 3.814: 99.6807% ( 7) 00:21:37.176 3.814 - 3.840: 99.6928% ( 2) 00:21:37.176 3.840 - 3.866: 99.6988% ( 1) 00:21:37.176 3.917 - 3.942: 99.7108% ( 2) 00:21:37.176 3.968 - 3.994: 99.7169% ( 1) 00:21:37.176 4.275 - 4.301: 99.7229% ( 1) 00:21:37.176 5.990 - 6.016: 99.7289% ( 1) 00:21:37.176 6.144 - 6.170: 99.7349% ( 1) 00:21:37.176 6.374 - 6.400: 99.7409% ( 1) 00:21:37.176 6.451 - 6.477: 99.7470% ( 1) 00:21:37.176 6.605 - 6.656: 99.7650% ( 3) 00:21:37.176 6.656 - 6.707: 99.7711% ( 1) 00:21:37.176 6.912 - 6.963: 99.7891% ( 3) 00:21:37.176 6.963 - 7.014: 99.7952% ( 1) 00:21:37.176 7.066 - 7.117: 99.8072% ( 2) 00:21:37.176 7.117 - 7.168: 99.8132% ( 1) 00:21:37.176 7.270 - 7.322: 99.8193% ( 1) 00:21:37.176 7.322 - 7.373: 99.8253% ( 1) 00:21:37.176 7.373 - 7.424: 99.8313% ( 1) 00:21:37.176 7.424 - 7.475: 99.8373% ( 1) 00:21:37.176 7.475 - 7.526: 99.8434% ( 1) 00:21:37.176 7.526 - 7.578: 99.8494% ( 1) 00:21:37.176 7.629 - 
7.680: 99.8614% ( 2) 00:21:37.176 7.680 - 7.731: 99.8735% ( 2) 00:21:37.176 7.782 - 7.834: 99.8795% ( 1) 00:21:37.176 7.834 - 7.885: 99.8855% ( 1) 00:21:37.176 7.987 - 8.038: 99.8916% ( 1) 00:21:37.176 8.192 - 8.243: 99.8976% ( 1) 00:21:37.176 8.755 - 8.806: 99.9036% ( 1) 00:21:37.176 10.342 - 10.394: 99.9096% ( 1) 00:21:37.176 17.101 - 17.203: 99.9157% ( 1) 00:21:37.176 3984.589 - 4010.803: 99.9940% ( 13) 00:21:37.176 4010.803 - 4037.018: 100.0000% ( 1) 00:21:37.176 00:21:37.176 Complete histogram 00:21:37.176 ================== 00:21:37.176 Range in us Cumulative Count 00:21:37.176 1.702 - 1.715: 0.2109% ( 35) 00:21:37.176 1.715 - 1.728: 17.8384% ( 2926) 00:21:37.176 1.728 - 1.741: 67.0522% ( 8169) 00:21:37.176 1.741 - 1.754: 77.1312% ( 1673) 00:21:37.176 1.754 - 1.766: 80.2880% ( 524) 00:21:37.176 1.766 - 1.779: 81.5893% ( 216) 00:21:37.176 1.779 - 1.792: 89.4933% ( 1312) 00:21:37.176 1.792 - 1.805: 96.8010% ( 1213) 00:21:37.176 1.805 - 1.818: 98.4156% ( 268) 00:21:37.176 1.818 - [2024-12-10 00:03:21.288613] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:37.176 1.830: 98.8975% ( 80) 00:21:37.176 1.830 - 1.843: 98.9999% ( 17) 00:21:37.176 1.843 - 1.856: 99.0542% ( 9) 00:21:37.176 1.856 - 1.869: 99.1024% ( 8) 00:21:37.176 1.869 - 1.882: 99.1204% ( 3) 00:21:37.176 1.882 - 1.894: 99.1445% ( 4) 00:21:37.176 1.894 - 1.907: 99.1566% ( 2) 00:21:37.176 1.946 - 1.958: 99.1626% ( 1) 00:21:37.176 1.958 - 1.971: 99.1686% ( 1) 00:21:37.176 1.997 - 2.010: 99.1746% ( 1) 00:21:37.176 2.010 - 2.022: 99.1807% ( 1) 00:21:37.176 2.022 - 2.035: 99.1867% ( 1) 00:21:37.176 2.061 - 2.074: 99.1927% ( 1) 00:21:37.176 2.074 - 2.086: 99.2048% ( 2) 00:21:37.176 2.099 - 2.112: 99.2108% ( 1) 00:21:37.176 2.150 - 2.163: 99.2228% ( 2) 00:21:37.176 2.163 - 2.176: 99.2349% ( 2) 00:21:37.176 2.176 - 2.189: 99.2409% ( 1) 00:21:37.176 2.214 - 2.227: 99.2530% ( 2) 00:21:37.176 2.266 - 2.278: 99.2590% ( 1) 00:21:37.176 2.368 - 2.381: 99.2650% ( 1) 00:21:37.176 4.506 - 4.531: 99.2710% ( 1) 00:21:37.176 4.685 - 4.710: 99.2771% ( 1) 00:21:37.176 4.787 - 4.813: 99.2831% ( 1) 00:21:37.176 4.915 - 4.941: 99.2891% ( 1) 00:21:37.176 5.171 - 5.197: 99.2951% ( 1) 00:21:37.176 5.222 - 5.248: 99.3012% ( 1) 00:21:37.176 5.248 - 5.274: 99.3072% ( 1) 00:21:37.176 5.274 - 5.299: 99.3132% ( 1) 00:21:37.176 5.299 - 5.325: 99.3192% ( 1) 00:21:37.176 5.453 - 5.478: 99.3313% ( 2) 00:21:37.176 5.530 - 5.555: 99.3433% ( 2) 00:21:37.176 5.632 - 5.658: 99.3494% ( 1) 00:21:37.176 5.683 - 5.709: 99.3554% ( 1) 00:21:37.176 5.760 - 5.786: 99.3614% ( 1) 00:21:37.176 5.811 - 5.837: 99.3674% ( 1) 00:21:37.176 5.862 - 5.888: 99.3735% ( 1) 00:21:37.176 5.888 - 5.914: 99.3795% ( 1) 00:21:37.176 5.914 - 5.939: 99.3855% ( 1) 00:21:37.176 5.990 - 6.016: 99.3976% ( 2) 00:21:37.176 6.042 - 6.067: 99.4036% ( 1) 00:21:37.176 6.067 - 6.093: 99.4096% ( 1) 00:21:37.176 6.118 - 6.144: 99.4156% ( 1) 00:21:37.176 6.298 - 6.323: 99.4217% ( 1) 00:21:37.176 6.374 - 6.400: 99.4337% ( 2) 00:21:37.176 6.451 - 6.477: 99.4397% ( 1) 00:21:37.176 6.502 - 6.528: 99.4457% ( 1) 00:21:37.176 7.373 - 7.424: 99.4518% ( 1) 00:21:37.176 14.643 - 14.746: 99.4578% ( 1) 00:21:37.176 19.251 - 19.354: 99.4638% ( 1) 00:21:37.176 47.718 - 47.923: 99.4698% ( 1) 00:21:37.176 1507.328 - 1513.882: 99.4759% ( 1) 00:21:37.176 3984.589 - 4010.803: 99.9940% ( 86) 00:21:37.176 5976.883 - 6003.098: 100.0000% ( 1) 00:21:37.176 00:21:37.176 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user 
/var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:21:37.176 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:21:37.176 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:21:37.176 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:21:37.176 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:37.176 [ 00:21:37.176 { 00:21:37.176 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:37.176 "subtype": "Discovery", 00:21:37.176 "listen_addresses": [], 00:21:37.176 "allow_any_host": true, 00:21:37.176 "hosts": [] 00:21:37.176 }, 00:21:37.176 { 00:21:37.176 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:37.176 "subtype": "NVMe", 00:21:37.176 "listen_addresses": [ 00:21:37.176 { 00:21:37.176 "trtype": "VFIOUSER", 00:21:37.176 "adrfam": "IPv4", 00:21:37.176 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:37.176 "trsvcid": "0" 00:21:37.176 } 00:21:37.176 ], 00:21:37.176 "allow_any_host": true, 00:21:37.176 "hosts": [], 00:21:37.176 "serial_number": "SPDK1", 00:21:37.176 "model_number": "SPDK bdev Controller", 00:21:37.176 "max_namespaces": 32, 00:21:37.176 "min_cntlid": 1, 00:21:37.176 "max_cntlid": 65519, 00:21:37.176 "namespaces": [ 00:21:37.176 { 00:21:37.176 "nsid": 1, 00:21:37.176 "bdev_name": "Malloc1", 00:21:37.176 "name": "Malloc1", 00:21:37.176 "nguid": "DA819657786D4C1398BE2451B7D0B8FD", 00:21:37.176 "uuid": "da819657-786d-4c13-98be-2451b7d0b8fd" 00:21:37.176 }, 00:21:37.176 { 00:21:37.176 "nsid": 2, 00:21:37.176 "bdev_name": "Malloc3", 00:21:37.176 "name": "Malloc3", 00:21:37.177 "nguid": "2428B10990534467BBCFA169BE404823", 00:21:37.177 "uuid": "2428b109-9053-4467-bbcf-a169be404823" 00:21:37.177 } 00:21:37.177 ] 00:21:37.177 }, 00:21:37.177 { 00:21:37.177 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:37.177 "subtype": "NVMe", 00:21:37.177 "listen_addresses": [ 00:21:37.177 { 00:21:37.177 "trtype": "VFIOUSER", 00:21:37.177 "adrfam": "IPv4", 00:21:37.177 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:37.177 "trsvcid": "0" 00:21:37.177 } 00:21:37.177 ], 00:21:37.177 "allow_any_host": true, 00:21:37.177 "hosts": [], 00:21:37.177 "serial_number": "SPDK2", 00:21:37.177 "model_number": "SPDK bdev Controller", 00:21:37.177 "max_namespaces": 32, 00:21:37.177 "min_cntlid": 1, 00:21:37.177 "max_cntlid": 65519, 00:21:37.177 "namespaces": [ 00:21:37.177 { 00:21:37.177 "nsid": 1, 00:21:37.177 "bdev_name": "Malloc2", 00:21:37.177 "name": "Malloc2", 00:21:37.177 "nguid": "082AE0DA39F9427BA9581B46F1AD48A9", 00:21:37.177 "uuid": "082ae0da-39f9-427b-a958-1b46f1ad48a9" 00:21:37.177 } 00:21:37.177 ] 00:21:37.177 } 00:21:37.177 ] 00:21:37.177 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:37.177 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:21:37.177 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=391402 00:21:37.177 00:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:21:37.177 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:21:37.177 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:37.177 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:37.177 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:21:37.177 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:37.177 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:37.177 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:37.177 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:21:37.177 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:37.439 [2024-12-10 00:03:21.694227] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:37.439 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:37.439 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:37.439 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:21:37.439 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:21:37.439 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:21:37.703 Malloc4 00:21:37.703 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:21:37.703 [2024-12-10 00:03:22.144575] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:37.703 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:37.967 Asynchronous Event Request test 00:21:37.967 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:37.967 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:37.967 Registering asynchronous event callbacks... 00:21:37.967 Starting namespace attribute notice tests for all controllers... 00:21:37.967 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:37.967 aer_cb - Changed Namespace 00:21:37.967 Cleaning up... 
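The AER test above launches the aer tool with a touch file and then polls for it in a bounded shell loop (up to 200 iterations of a 0.1 s sleep, roughly 20 seconds). A minimal Python equivalent of that polling pattern is sketched below; the helper name is illustrative and not part of the test harness.

    # Sketch: file-polling helper mirroring the waitforfile loop above.
    import os
    import time

    def wait_for_file(path: str, attempts: int = 200, delay: float = 0.1) -> bool:
        for _ in range(attempts):
            if os.path.exists(path):
                return True
            time.sleep(delay)
        return os.path.exists(path)

    # wait_for_file("/tmp/aer_touch_file")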
00:21:37.967 [ 00:21:37.967 { 00:21:37.967 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:37.967 "subtype": "Discovery", 00:21:37.967 "listen_addresses": [], 00:21:37.967 "allow_any_host": true, 00:21:37.967 "hosts": [] 00:21:37.967 }, 00:21:37.967 { 00:21:37.967 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:37.967 "subtype": "NVMe", 00:21:37.967 "listen_addresses": [ 00:21:37.967 { 00:21:37.967 "trtype": "VFIOUSER", 00:21:37.967 "adrfam": "IPv4", 00:21:37.967 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:37.967 "trsvcid": "0" 00:21:37.967 } 00:21:37.967 ], 00:21:37.967 "allow_any_host": true, 00:21:37.967 "hosts": [], 00:21:37.967 "serial_number": "SPDK1", 00:21:37.967 "model_number": "SPDK bdev Controller", 00:21:37.967 "max_namespaces": 32, 00:21:37.967 "min_cntlid": 1, 00:21:37.967 "max_cntlid": 65519, 00:21:37.967 "namespaces": [ 00:21:37.967 { 00:21:37.967 "nsid": 1, 00:21:37.967 "bdev_name": "Malloc1", 00:21:37.967 "name": "Malloc1", 00:21:37.967 "nguid": "DA819657786D4C1398BE2451B7D0B8FD", 00:21:37.967 "uuid": "da819657-786d-4c13-98be-2451b7d0b8fd" 00:21:37.967 }, 00:21:37.967 { 00:21:37.967 "nsid": 2, 00:21:37.967 "bdev_name": "Malloc3", 00:21:37.967 "name": "Malloc3", 00:21:37.967 "nguid": "2428B10990534467BBCFA169BE404823", 00:21:37.967 "uuid": "2428b109-9053-4467-bbcf-a169be404823" 00:21:37.967 } 00:21:37.967 ] 00:21:37.967 }, 00:21:37.967 { 00:21:37.967 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:37.967 "subtype": "NVMe", 00:21:37.967 "listen_addresses": [ 00:21:37.967 { 00:21:37.967 "trtype": "VFIOUSER", 00:21:37.967 "adrfam": "IPv4", 00:21:37.967 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:37.967 "trsvcid": "0" 00:21:37.967 } 00:21:37.967 ], 00:21:37.967 "allow_any_host": true, 00:21:37.967 "hosts": [], 00:21:37.967 "serial_number": "SPDK2", 00:21:37.967 "model_number": "SPDK bdev Controller", 00:21:37.967 "max_namespaces": 32, 00:21:37.967 "min_cntlid": 1, 00:21:37.967 "max_cntlid": 65519, 00:21:37.967 "namespaces": [ 00:21:37.967 { 00:21:37.967 "nsid": 1, 00:21:37.967 "bdev_name": "Malloc2", 00:21:37.967 "name": "Malloc2", 00:21:37.967 "nguid": "082AE0DA39F9427BA9581B46F1AD48A9", 00:21:37.967 "uuid": "082ae0da-39f9-427b-a958-1b46f1ad48a9" 00:21:37.967 }, 00:21:37.967 { 00:21:37.967 "nsid": 2, 00:21:37.967 "bdev_name": "Malloc4", 00:21:37.967 "name": "Malloc4", 00:21:37.967 "nguid": "C614B5DFA02347CABFC95592A9A359D7", 00:21:37.967 "uuid": "c614b5df-a023-47ca-bfc9-5592a9a359d7" 00:21:37.967 } 00:21:37.967 ] 00:21:37.967 } 00:21:37.967 ] 00:21:37.967 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 391402 00:21:37.967 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:21:37.967 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 382909 00:21:37.967 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 382909 ']' 00:21:37.967 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 382909 00:21:37.967 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:21:37.967 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.967 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 382909 00:21:37.967 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.967 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.967 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 382909' 00:21:37.967 killing process with pid 382909 00:21:37.967 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 382909 00:21:37.967 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 382909 00:21:38.230 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:21:38.230 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:21:38.230 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:21:38.230 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:21:38.230 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:21:38.230 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=391672 00:21:38.230 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 391672' 00:21:38.230 Process pid: 391672 00:21:38.231 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:21:38.231 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:38.231 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 391672 00:21:38.231 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 391672 ']' 00:21:38.231 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.231 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.231 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.231 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.231 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:21:38.498 [2024-12-10 00:03:22.726179] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:21:38.498 [2024-12-10 00:03:22.727105] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:21:38.498 [2024-12-10 00:03:22.727145] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.498 [2024-12-10 00:03:22.811985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.498 [2024-12-10 00:03:22.851554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.498 [2024-12-10 00:03:22.851590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.498 [2024-12-10 00:03:22.851601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.498 [2024-12-10 00:03:22.851610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.498 [2024-12-10 00:03:22.851617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.498 [2024-12-10 00:03:22.853195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.498 [2024-12-10 00:03:22.853212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.498 [2024-12-10 00:03:22.853330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.498 [2024-12-10 00:03:22.853331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.498 [2024-12-10 00:03:22.922636] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:21:38.498 [2024-12-10 00:03:22.922753] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:21:38.498 [2024-12-10 00:03:22.923193] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:21:38.498 [2024-12-10 00:03:22.923399] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:21:38.498 [2024-12-10 00:03:22.923464] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
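With the interrupt-mode target up, the trace that follows re-runs the setup helper against it: it creates the VFIOUSER transport with the -M -I options, then builds two controllers end to end. A condensed sketch of that RPC sequence, assuming it is run from the SPDK checkout used in this job (the NQNs, bdev names and socket directories are the ones that appear below):

  # Condensed sketch of the setup steps traced below (not the test script itself).
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
      scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done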
00:21:39.450 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.450 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:21:39.450 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:21:40.442 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:21:40.442 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:21:40.442 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:21:40.442 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:40.442 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:21:40.442 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:40.707 Malloc1 00:21:40.707 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:21:40.973 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:21:40.973 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:21:41.245 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:41.245 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:21:41.245 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:41.511 Malloc2 00:21:41.512 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:21:41.775 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:21:41.775 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:21:42.051 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:21:42.051 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 391672 00:21:42.051 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 391672 ']' 00:21:42.051 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 391672 00:21:42.051 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:21:42.051 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.051 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 391672 00:21:42.051 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:42.051 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:42.051 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 391672' 00:21:42.051 killing process with pid 391672 00:21:42.051 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 391672 00:21:42.051 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 391672 00:21:42.351 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:21:42.351 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:21:42.351 00:21:42.351 real 0m52.664s 00:21:42.351 user 3m21.094s 00:21:42.351 sys 0m3.944s 00:21:42.351 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:42.351 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:21:42.351 ************************************ 00:21:42.351 END TEST nvmf_vfio_user 00:21:42.351 ************************************ 00:21:42.351 00:03:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:21:42.351 00:03:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:42.351 00:03:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:42.351 00:03:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:42.351 ************************************ 00:21:42.351 START TEST nvmf_vfio_user_nvme_compliance 00:21:42.351 ************************************ 00:21:42.351 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:21:42.630 * Looking for test storage... 
00:21:42.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.630 --rc genhtml_branch_coverage=1 00:21:42.630 --rc genhtml_function_coverage=1 00:21:42.630 --rc genhtml_legend=1 00:21:42.630 --rc geninfo_all_blocks=1 00:21:42.630 --rc geninfo_unexecuted_blocks=1 00:21:42.630 00:21:42.630 ' 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.630 --rc genhtml_branch_coverage=1 00:21:42.630 --rc genhtml_function_coverage=1 00:21:42.630 --rc genhtml_legend=1 00:21:42.630 --rc geninfo_all_blocks=1 00:21:42.630 --rc geninfo_unexecuted_blocks=1 00:21:42.630 00:21:42.630 ' 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.630 --rc genhtml_branch_coverage=1 00:21:42.630 --rc genhtml_function_coverage=1 00:21:42.630 --rc genhtml_legend=1 00:21:42.630 --rc geninfo_all_blocks=1 00:21:42.630 --rc geninfo_unexecuted_blocks=1 00:21:42.630 00:21:42.630 ' 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.630 --rc genhtml_branch_coverage=1 00:21:42.630 --rc genhtml_function_coverage=1 00:21:42.630 --rc genhtml_legend=1 00:21:42.630 --rc geninfo_all_blocks=1 00:21:42.630 --rc 
geninfo_unexecuted_blocks=1 00:21:42.630 00:21:42.630 ' 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.630 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:42.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=392561 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 392561' 00:21:42.631 Process pid: 392561 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 392561 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 392561 ']' 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.631 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:42.631 [2024-12-10 00:03:27.035512] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:21:42.631 [2024-12-10 00:03:27.035563] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.934 [2024-12-10 00:03:27.123743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:42.934 [2024-12-10 00:03:27.163494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.934 [2024-12-10 00:03:27.163547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.934 [2024-12-10 00:03:27.163557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.934 [2024-12-10 00:03:27.163565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.934 [2024-12-10 00:03:27.163572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.934 [2024-12-10 00:03:27.164967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.934 [2024-12-10 00:03:27.165077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.934 [2024-12-10 00:03:27.165079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.521 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.521 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:21:43.521 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:44.510 malloc0 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:21:44.510 00:03:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.510 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:21:44.782 00:21:44.782 00:21:44.782 CUnit - A unit testing framework for C - Version 2.1-3 00:21:44.782 http://cunit.sourceforge.net/ 00:21:44.782 00:21:44.782 00:21:44.782 Suite: nvme_compliance 00:21:44.782 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-10 00:03:29.134340] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:44.782 [2024-12-10 00:03:29.135713] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:21:44.782 [2024-12-10 00:03:29.135731] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:21:44.782 [2024-12-10 00:03:29.135739] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:21:44.782 [2024-12-10 00:03:29.137357] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:44.782 passed 00:21:44.782 Test: admin_identify_ctrlr_verify_fused ...[2024-12-10 00:03:29.215974] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:44.782 [2024-12-10 00:03:29.218997] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:44.782 passed 00:21:45.092 Test: admin_identify_ns ...[2024-12-10 00:03:29.301034] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:45.092 [2024-12-10 00:03:29.361838] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:21:45.092 [2024-12-10 00:03:29.369845] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:21:45.092 [2024-12-10 00:03:29.390930] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:21:45.092 passed 00:21:45.092 Test: admin_get_features_mandatory_features ...[2024-12-10 00:03:29.463527] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:45.092 [2024-12-10 00:03:29.466546] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:45.092 passed 00:21:45.092 Test: admin_get_features_optional_features ...[2024-12-10 00:03:29.543094] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:45.370 [2024-12-10 00:03:29.546109] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:45.370 passed 00:21:45.370 Test: admin_set_features_number_of_queues ...[2024-12-10 00:03:29.623685] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:45.370 [2024-12-10 00:03:29.728928] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:45.370 passed 00:21:45.370 Test: admin_get_log_page_mandatory_logs ...[2024-12-10 00:03:29.804542] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:45.370 [2024-12-10 00:03:29.807562] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:45.370 passed 00:21:45.632 Test: admin_get_log_page_with_lpo ...[2024-12-10 00:03:29.881095] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:45.632 [2024-12-10 00:03:29.949838] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:21:45.632 [2024-12-10 00:03:29.962900] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:45.632 passed 00:21:45.632 Test: fabric_property_get ...[2024-12-10 00:03:30.036388] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:45.632 [2024-12-10 00:03:30.037652] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:21:45.632 [2024-12-10 00:03:30.039417] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:45.632 passed 00:21:45.896 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-10 00:03:30.122998] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:45.896 [2024-12-10 00:03:30.124254] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:21:45.896 [2024-12-10 00:03:30.126021] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:45.896 passed 00:21:45.896 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-10 00:03:30.202762] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:45.896 [2024-12-10 00:03:30.286836] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:45.896 [2024-12-10 00:03:30.302833] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:45.896 [2024-12-10 00:03:30.308002] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:45.896 passed 00:21:46.166 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-10 00:03:30.379638] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:46.166 [2024-12-10 00:03:30.380884] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:21:46.166 [2024-12-10 00:03:30.385675] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:21:46.166 passed 00:21:46.166 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-10 00:03:30.456352] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:46.166 [2024-12-10 00:03:30.532832] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:21:46.166 [2024-12-10 00:03:30.556838] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:46.166 [2024-12-10 00:03:30.561916] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:46.166 passed 00:21:46.166 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-10 00:03:30.635502] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:46.468 [2024-12-10 00:03:30.636750] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:21:46.468 [2024-12-10 00:03:30.636778] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:21:46.468 [2024-12-10 00:03:30.638524] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:46.468 passed 00:21:46.468 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-10 00:03:30.715065] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:46.468 [2024-12-10 00:03:30.807832] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:21:46.468 [2024-12-10 00:03:30.815833] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:21:46.468 [2024-12-10 00:03:30.823834] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:21:46.468 [2024-12-10 00:03:30.831828] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:21:46.468 [2024-12-10 00:03:30.860909] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:46.468 passed 00:21:46.468 Test: admin_create_io_sq_verify_pc ...[2024-12-10 00:03:30.934348] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:46.745 [2024-12-10 00:03:30.949841] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:21:46.745 [2024-12-10 00:03:30.967515] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:46.745 passed 00:21:46.745 Test: admin_create_io_qp_max_qps ...[2024-12-10 00:03:31.044074] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:47.737 [2024-12-10 00:03:32.145835] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:21:48.328 [2024-12-10 00:03:32.530769] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:48.328 passed 00:21:48.328 Test: admin_create_io_sq_shared_cq ...[2024-12-10 00:03:32.606501] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:48.328 [2024-12-10 00:03:32.738834] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:21:48.328 [2024-12-10 00:03:32.775885] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:48.597 passed 00:21:48.597 00:21:48.597 Run Summary: Type Total Ran Passed Failed Inactive 00:21:48.597 suites 1 1 n/a 0 0 00:21:48.597 tests 18 18 18 0 0 00:21:48.597 asserts 
360 360 360 0 n/a 00:21:48.597 00:21:48.597 Elapsed time = 1.498 seconds 00:21:48.597 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 392561 00:21:48.597 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 392561 ']' 00:21:48.597 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 392561 00:21:48.597 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:21:48.597 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.597 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 392561 00:21:48.597 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.597 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.597 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 392561' 00:21:48.597 killing process with pid 392561 00:21:48.597 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 392561 00:21:48.597 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 392561 00:21:48.597 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:21:48.888 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:21:48.888 00:21:48.888 real 0m6.319s 00:21:48.888 user 0m17.720s 00:21:48.888 sys 0m0.758s 00:21:48.888 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.888 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:48.888 ************************************ 00:21:48.888 END TEST nvmf_vfio_user_nvme_compliance 00:21:48.888 ************************************ 00:21:48.888 00:03:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:21:48.888 00:03:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:48.888 00:03:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.888 00:03:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:48.888 ************************************ 00:21:48.888 START TEST nvmf_vfio_user_fuzz 00:21:48.888 ************************************ 00:21:48.888 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:21:48.888 * Looking for test storage... 
00:21:48.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:48.888 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:21:48.889 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:49.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.155 --rc genhtml_branch_coverage=1 00:21:49.155 --rc genhtml_function_coverage=1 00:21:49.155 --rc genhtml_legend=1 00:21:49.155 --rc geninfo_all_blocks=1 00:21:49.155 --rc geninfo_unexecuted_blocks=1 00:21:49.155 00:21:49.155 ' 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:49.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.155 --rc genhtml_branch_coverage=1 00:21:49.155 --rc genhtml_function_coverage=1 00:21:49.155 --rc genhtml_legend=1 00:21:49.155 --rc geninfo_all_blocks=1 00:21:49.155 --rc geninfo_unexecuted_blocks=1 00:21:49.155 00:21:49.155 ' 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:49.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.155 --rc genhtml_branch_coverage=1 00:21:49.155 --rc genhtml_function_coverage=1 00:21:49.155 --rc genhtml_legend=1 00:21:49.155 --rc geninfo_all_blocks=1 00:21:49.155 --rc geninfo_unexecuted_blocks=1 00:21:49.155 00:21:49.155 ' 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:49.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.155 --rc genhtml_branch_coverage=1 00:21:49.155 --rc genhtml_function_coverage=1 00:21:49.155 --rc genhtml_legend=1 00:21:49.155 --rc geninfo_all_blocks=1 00:21:49.155 --rc geninfo_unexecuted_blocks=1 00:21:49.155 00:21:49.155 ' 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:49.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=393708 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 393708' 00:21:49.155 Process pid: 393708 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 393708 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 393708 ']' 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
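The "[: : integer expression expected" complaint above is common.sh line 33 evaluating [ '' -eq 1 ] while the tested variable is unset, so bash's integer test sees an empty string and returns an error status that the script simply ignores. A minimal defensive sketch of the usual fix, using SPDK_TEST_EXAMPLE purely as a hypothetical stand-in (the trace does not show which variable is empty):

# Default the flag to 0 so the integer comparison never sees an empty string.
if [ "${SPDK_TEST_EXAMPLE:-0}" -eq 1 ]; then
    echo "feature enabled"
fi

The run continues regardless, since the failed test only short-circuits that branch, but the defaulted form keeps the noise out of the log.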
00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.155 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:50.154 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.154 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:21:50.154 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:51.146 malloc0 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
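Condensed, the rpc_cmd calls traced above set up the vfio-user fuzz target as follows. This is a sketch assuming a running nvmf_tgt with its RPC server on the default /var/tmp/spdk.sock (rpc_cmd is the test wrapper around scripts/rpc.py), not a substitute for vfio_user_fuzz.sh itself:

RPC="./scripts/rpc.py"                                   # default socket /var/tmp/spdk.sock assumed
$RPC nvmf_create_transport -t VFIOUSER                   # vfio-user transport
mkdir -p /var/run/vfio-user                              # socket directory for the listener
$RPC bdev_malloc_create 64 512 -b malloc0                # 64 MiB backing bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz invocation that follows points at this listener for 30 seconds with a fixed random seed (-S 123456) before the summary of successful admin and I/O opcodes is printed.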
00:21:51.146 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:22:23.228 Fuzzing completed. Shutting down the fuzz application 00:22:23.228 00:22:23.228 Dumping successful admin opcodes: 00:22:23.228 9, 10, 00:22:23.228 Dumping successful io opcodes: 00:22:23.228 0, 00:22:23.228 NS: 0x20000081ef00 I/O qp, Total commands completed: 858536, total successful commands: 3333, random_seed: 2605399104 00:22:23.228 NS: 0x20000081ef00 admin qp, Total commands completed: 205984, total successful commands: 48, random_seed: 2163927488 00:22:23.228 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:22:23.228 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.228 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:23.228 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.228 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 393708 00:22:23.228 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 393708 ']' 00:22:23.228 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 393708 00:22:23.228 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:22:23.228 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.228 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 393708 00:22:23.228 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.228 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.229 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 393708' 00:22:23.229 killing process with pid 393708 00:22:23.229 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 393708 00:22:23.229 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 393708 00:22:23.229 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:22:23.229 00:22:23.229 real 0m32.920s 00:22:23.229 user 0m29.473s 00:22:23.229 sys 0m32.592s 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:23.229 ************************************ 
00:22:23.229 END TEST nvmf_vfio_user_fuzz 00:22:23.229 ************************************ 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:23.229 ************************************ 00:22:23.229 START TEST nvmf_auth_target 00:22:23.229 ************************************ 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:23.229 * Looking for test storage... 00:22:23.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.229 --rc genhtml_branch_coverage=1 00:22:23.229 --rc genhtml_function_coverage=1 00:22:23.229 --rc genhtml_legend=1 00:22:23.229 --rc geninfo_all_blocks=1 00:22:23.229 --rc geninfo_unexecuted_blocks=1 00:22:23.229 00:22:23.229 ' 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.229 --rc genhtml_branch_coverage=1 00:22:23.229 --rc genhtml_function_coverage=1 00:22:23.229 --rc genhtml_legend=1 00:22:23.229 --rc geninfo_all_blocks=1 00:22:23.229 --rc geninfo_unexecuted_blocks=1 00:22:23.229 00:22:23.229 ' 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.229 --rc genhtml_branch_coverage=1 00:22:23.229 --rc genhtml_function_coverage=1 00:22:23.229 --rc genhtml_legend=1 00:22:23.229 --rc geninfo_all_blocks=1 00:22:23.229 --rc geninfo_unexecuted_blocks=1 00:22:23.229 00:22:23.229 ' 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.229 --rc genhtml_branch_coverage=1 00:22:23.229 --rc genhtml_function_coverage=1 00:22:23.229 --rc genhtml_legend=1 00:22:23.229 --rc geninfo_all_blocks=1 00:22:23.229 --rc geninfo_unexecuted_blocks=1 00:22:23.229 00:22:23.229 ' 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.229 00:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.229 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:23.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:22:23.230 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:22:29.818 
00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.818 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:29.818 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.819 00:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:29.819 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:29.819 Found net devices under 0000:af:00.0: cvl_0_0 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:29.819 Found net devices under 0000:af:00.1: cvl_0_1 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:29.819 00:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:29.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:22:29.819 00:22:29.819 --- 10.0.0.2 ping statistics --- 00:22:29.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.819 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:29.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:22:29.819 00:22:29.819 --- 10.0.0.1 ping statistics --- 00:22:29.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.819 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=402454 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 402454 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 402454 ']' 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
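The pings above close out nvmf_tcp_init: one port of the E810 NIC (cvl_0_0) is moved into a fresh network namespace to act as the target, the sibling port (cvl_0_1) stays in the root namespace as the initiator, TCP/4420 is opened, and connectivity is verified both ways. A condensed sketch, with the cvl_0_* names taken from this particular host:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP from the initiator side
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1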
00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.819 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=402648 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=54cb1a21be771075f15c47fcd0d015aa24ab447795be235e 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.hrd 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 54cb1a21be771075f15c47fcd0d015aa24ab447795be235e 0 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 54cb1a21be771075f15c47fcd0d015aa24ab447795be235e 0 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:22:30.395 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=54cb1a21be771075f15c47fcd0d015aa24ab447795be235e 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
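At this point auth.sh is driving two SPDK processes: the nvmf target (pid 402454) launched inside the cvl_0_0_ns_spdk namespace with -L nvmf_auth, and a second spdk_tgt (pid 402648) on core mask 0x2 whose RPC socket /var/tmp/host.sock plays the NVMe host role; every later "hostrpc" call is rpc.py pointed at that socket. A rough equivalent, assuming binaries built under ./build/bin of an SPDK checkout:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &   # target side
./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &                     # host side
# Poll each RPC socket until it answers (waitforlisten does this with bounded retries).
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
until ./scripts/rpc.py -s /var/tmp/host.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done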
00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.hrd 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.hrd 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.hrd 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=79e007adf4fdf5dce08d073453158b9a0df5d3bd11871f2a71d87bcd55679fc4 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.JJ0 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 79e007adf4fdf5dce08d073453158b9a0df5d3bd11871f2a71d87bcd55679fc4 3 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 79e007adf4fdf5dce08d073453158b9a0df5d3bd11871f2a71d87bcd55679fc4 3 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=79e007adf4fdf5dce08d073453158b9a0df5d3bd11871f2a71d87bcd55679fc4 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.JJ0 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.JJ0 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.JJ0 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=71fdeb9148e2dc2b03427c6ef51e81b7 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9Xt 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 71fdeb9148e2dc2b03427c6ef51e81b7 1 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 71fdeb9148e2dc2b03427c6ef51e81b7 1 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=71fdeb9148e2dc2b03427c6ef51e81b7 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:22:30.396 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9Xt 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9Xt 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.9Xt 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f1c9ebf7c3daaf30f8226478d246c6043faefadb523a8033 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.HjC 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f1c9ebf7c3daaf30f8226478d246c6043faefadb523a8033 2 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f1c9ebf7c3daaf30f8226478d246c6043faefadb523a8033 2 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:22:30.660 00:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f1c9ebf7c3daaf30f8226478d246c6043faefadb523a8033 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.HjC 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.HjC 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.HjC 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6f897c4f96cd8f93bb65030f0b8be3a0c5bfeee06f932b92 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.VF4 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6f897c4f96cd8f93bb65030f0b8be3a0c5bfeee06f932b92 2 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6f897c4f96cd8f93bb65030f0b8be3a0c5bfeee06f932b92 2 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6f897c4f96cd8f93bb65030f0b8be3a0c5bfeee06f932b92 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:22:30.660 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.VF4 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.VF4 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.VF4 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a3c86d938b4e44bcaa0c10b42bf6ee52 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Nb6 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a3c86d938b4e44bcaa0c10b42bf6ee52 1 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a3c86d938b4e44bcaa0c10b42bf6ee52 1 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a3c86d938b4e44bcaa0c10b42bf6ee52 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Nb6 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Nb6 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Nb6 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eaedba95041e6c1efa242aa67fb645a1940c2de0745619fb588ec20cb33857ac 00:22:30.660 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:22:30.661 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.KKa 00:22:30.661 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key eaedba95041e6c1efa242aa67fb645a1940c2de0745619fb588ec20cb33857ac 3 00:22:30.661 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eaedba95041e6c1efa242aa67fb645a1940c2de0745619fb588ec20cb33857ac 3 00:22:30.661 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:22:30.661 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:30.661 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eaedba95041e6c1efa242aa67fb645a1940c2de0745619fb588ec20cb33857ac 00:22:30.661 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:22:30.661 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.KKa 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.KKa 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.KKa 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 402454 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 402454 ']' 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 402648 /var/tmp/host.sock 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 402648 ']' 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:22:30.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
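Annotation: from this point the trace registers each generated secret twice, once with the SPDK target (rpc_cmd; /var/tmp/spdk.sock is the socket waitforlisten checks above, so it is taken here as the target's RPC address) and once with the host-side bdev_nvme stack over /var/tmp/host.sock (hostrpc), then exercises every digest/dhgroup/key combination. Condensed to the rpc.py calls actually visible in the log, one iteration looks roughly like the sketch below; the NQNs, the UUID-based host NQN and the key paths are the ones from this specific run.

  # One iteration of the traced auth loop, condensed (paths/NQNs from this run).
  RPC=scripts/rpc.py
  TGT_SOCK=/var/tmp/spdk.sock           # target RPC socket (rpc_cmd)
  HOST_SOCK=/var/tmp/host.sock          # host RPC socket (hostrpc)
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

  # Register key0/ckey0 on both sides.
  $RPC -s $TGT_SOCK  keyring_file_add_key key0  /tmp/spdk.key-null.hrd
  $RPC -s $HOST_SOCK keyring_file_add_key key0  /tmp/spdk.key-null.hrd
  $RPC -s $TGT_SOCK  keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JJ0
  $RPC -s $HOST_SOCK keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JJ0

  # Restrict the host to one digest/dhgroup, allow the host on the subsystem
  # with that key pair, then attach a controller that authenticates with it.
  $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  $RPC -s $TGT_SOCK  nvmf_subsystem_add_host $SUBNQN $HOSTNQN \
          --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

After each attach, the remainder of the trace verifies the result by querying nvmf_subsystem_get_qpairs on the target (expecting auth state "completed" with the chosen digest and dhgroup), repeats the handshake once more with nvme connect using the literal DHHC-1 secrets, and tears down via bdev_nvme_detach_controller, nvme disconnect and nvmf_subsystem_remove_host before moving to the next key or dhgroup.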
00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.919 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.178 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.178 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:31.178 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:22:31.178 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.178 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.178 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.178 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:31.178 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hrd 00:22:31.178 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.178 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.178 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.178 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.hrd 00:22:31.178 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.hrd 00:22:31.454 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.JJ0 ]] 00:22:31.455 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JJ0 00:22:31.455 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.455 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.455 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.455 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JJ0 00:22:31.455 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JJ0 00:22:31.748 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:31.748 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.9Xt 00:22:31.748 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.748 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.748 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.748 00:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.9Xt 00:22:31.748 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.9Xt 00:22:31.748 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.HjC ]] 00:22:31.748 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HjC 00:22:31.748 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.748 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.748 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.748 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HjC 00:22:31.748 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HjC 00:22:32.030 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:32.030 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.VF4 00:22:32.030 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.030 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.030 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.030 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.VF4 00:22:32.030 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.VF4 00:22:32.288 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Nb6 ]] 00:22:32.288 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nb6 00:22:32.288 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.288 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.288 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.288 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nb6 00:22:32.288 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nb6 00:22:32.546 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:32.546 00:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.KKa 00:22:32.546 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.547 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.547 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.547 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.KKa 00:22:32.547 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.KKa 00:22:32.547 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:22:32.547 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:32.547 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:32.547 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.547 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:32.547 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:32.805 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:22:32.805 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.805 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:32.805 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:32.805 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:32.805 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.805 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.805 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.805 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.805 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.805 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.806 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.806 
00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.063 00:22:33.063 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.063 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.063 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.321 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.321 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.321 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.321 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.321 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.321 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.321 { 00:22:33.321 "cntlid": 1, 00:22:33.322 "qid": 0, 00:22:33.322 "state": "enabled", 00:22:33.322 "thread": "nvmf_tgt_poll_group_000", 00:22:33.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:33.322 "listen_address": { 00:22:33.322 "trtype": "TCP", 00:22:33.322 "adrfam": "IPv4", 00:22:33.322 "traddr": "10.0.0.2", 00:22:33.322 "trsvcid": "4420" 00:22:33.322 }, 00:22:33.322 "peer_address": { 00:22:33.322 "trtype": "TCP", 00:22:33.322 "adrfam": "IPv4", 00:22:33.322 "traddr": "10.0.0.1", 00:22:33.322 "trsvcid": "42724" 00:22:33.322 }, 00:22:33.322 "auth": { 00:22:33.322 "state": "completed", 00:22:33.322 "digest": "sha256", 00:22:33.322 "dhgroup": "null" 00:22:33.322 } 00:22:33.322 } 00:22:33.322 ]' 00:22:33.322 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.322 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:33.322 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.322 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:33.322 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.322 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.322 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.322 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.580 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:22:33.580 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:22:36.861 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.861 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:36.861 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.861 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.861 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.861 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.861 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:36.861 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:37.122 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:22:37.122 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.122 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:37.122 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:37.122 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:37.122 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.122 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.123 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.123 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.123 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.123 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.123 00:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.123 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.382 00:22:37.382 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.382 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.382 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.641 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.641 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.641 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.641 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.641 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.641 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.641 { 00:22:37.641 "cntlid": 3, 00:22:37.641 "qid": 0, 00:22:37.641 "state": "enabled", 00:22:37.641 "thread": "nvmf_tgt_poll_group_000", 00:22:37.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:37.641 "listen_address": { 00:22:37.641 "trtype": "TCP", 00:22:37.641 "adrfam": "IPv4", 00:22:37.641 "traddr": "10.0.0.2", 00:22:37.641 "trsvcid": "4420" 00:22:37.641 }, 00:22:37.641 "peer_address": { 00:22:37.641 "trtype": "TCP", 00:22:37.641 "adrfam": "IPv4", 00:22:37.641 "traddr": "10.0.0.1", 00:22:37.641 "trsvcid": "42768" 00:22:37.641 }, 00:22:37.641 "auth": { 00:22:37.641 "state": "completed", 00:22:37.641 "digest": "sha256", 00:22:37.641 "dhgroup": "null" 00:22:37.641 } 00:22:37.641 } 00:22:37.641 ]' 00:22:37.641 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.641 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:37.641 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.641 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:37.641 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.641 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.641 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.641 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.899 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:22:37.899 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:22:38.464 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.465 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:38.465 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.465 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.465 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.465 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.465 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:38.465 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:38.722 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:22:38.722 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.722 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:38.722 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:38.722 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:38.722 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.723 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.723 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.723 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.723 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.723 00:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.723 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.723 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.981 00:22:38.981 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.981 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.981 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.238 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.238 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.238 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.238 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.238 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.238 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.238 { 00:22:39.238 "cntlid": 5, 00:22:39.238 "qid": 0, 00:22:39.238 "state": "enabled", 00:22:39.238 "thread": "nvmf_tgt_poll_group_000", 00:22:39.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:39.238 "listen_address": { 00:22:39.238 "trtype": "TCP", 00:22:39.238 "adrfam": "IPv4", 00:22:39.238 "traddr": "10.0.0.2", 00:22:39.238 "trsvcid": "4420" 00:22:39.238 }, 00:22:39.238 "peer_address": { 00:22:39.238 "trtype": "TCP", 00:22:39.238 "adrfam": "IPv4", 00:22:39.238 "traddr": "10.0.0.1", 00:22:39.238 "trsvcid": "42790" 00:22:39.238 }, 00:22:39.238 "auth": { 00:22:39.238 "state": "completed", 00:22:39.238 "digest": "sha256", 00:22:39.238 "dhgroup": "null" 00:22:39.238 } 00:22:39.238 } 00:22:39.238 ]' 00:22:39.238 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:39.238 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:39.238 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:39.238 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:39.238 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.238 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.238 00:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.238 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.496 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:22:39.496 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:22:40.063 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.063 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:40.063 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.063 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.063 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.063 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:40.063 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:40.063 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:40.321 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:22:40.321 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:40.321 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:40.321 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:40.321 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:40.321 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.321 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:22:40.321 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.321 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:40.321 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.321 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:40.321 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:40.321 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:40.580 00:22:40.580 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.580 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.580 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.580 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.580 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.580 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.580 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.845 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.845 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.845 { 00:22:40.845 "cntlid": 7, 00:22:40.845 "qid": 0, 00:22:40.845 "state": "enabled", 00:22:40.845 "thread": "nvmf_tgt_poll_group_000", 00:22:40.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:40.845 "listen_address": { 00:22:40.845 "trtype": "TCP", 00:22:40.845 "adrfam": "IPv4", 00:22:40.845 "traddr": "10.0.0.2", 00:22:40.845 "trsvcid": "4420" 00:22:40.845 }, 00:22:40.845 "peer_address": { 00:22:40.845 "trtype": "TCP", 00:22:40.845 "adrfam": "IPv4", 00:22:40.845 "traddr": "10.0.0.1", 00:22:40.845 "trsvcid": "42828" 00:22:40.845 }, 00:22:40.845 "auth": { 00:22:40.845 "state": "completed", 00:22:40.845 "digest": "sha256", 00:22:40.845 "dhgroup": "null" 00:22:40.845 } 00:22:40.845 } 00:22:40.845 ]' 00:22:40.845 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.845 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:40.845 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.845 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:40.845 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.845 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.845 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.845 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.105 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:22:41.105 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:22:41.670 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.670 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:41.670 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.670 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.670 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.670 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.670 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.670 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:41.670 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:41.670 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:22:41.670 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.670 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:41.670 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:41.670 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:41.670 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.670 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.670 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.670 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.670 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.670 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.670 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.670 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.928 00:22:41.928 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.928 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.928 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.186 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.186 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.186 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.186 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.186 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.186 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.186 { 00:22:42.186 "cntlid": 9, 00:22:42.186 "qid": 0, 00:22:42.186 "state": "enabled", 00:22:42.186 "thread": "nvmf_tgt_poll_group_000", 00:22:42.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:42.186 "listen_address": { 00:22:42.186 "trtype": "TCP", 00:22:42.186 "adrfam": "IPv4", 00:22:42.186 "traddr": "10.0.0.2", 00:22:42.186 "trsvcid": "4420" 00:22:42.186 }, 00:22:42.186 "peer_address": { 00:22:42.186 "trtype": "TCP", 00:22:42.186 "adrfam": "IPv4", 00:22:42.186 "traddr": "10.0.0.1", 00:22:42.186 "trsvcid": "42974" 00:22:42.186 }, 00:22:42.186 "auth": { 00:22:42.186 "state": "completed", 00:22:42.186 "digest": "sha256", 00:22:42.186 "dhgroup": "ffdhe2048" 00:22:42.186 } 00:22:42.186 } 00:22:42.186 ]' 00:22:42.186 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.186 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:42.186 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.442 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:22:42.442 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.442 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.442 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.442 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.442 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:22:42.700 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:22:43.266 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.267 00:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.267 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.526 00:22:43.526 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.526 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.526 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.783 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.783 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.783 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.783 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.783 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.783 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.783 { 00:22:43.783 "cntlid": 11, 00:22:43.783 "qid": 0, 00:22:43.783 "state": "enabled", 00:22:43.783 "thread": "nvmf_tgt_poll_group_000", 00:22:43.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:43.783 "listen_address": { 00:22:43.783 "trtype": "TCP", 00:22:43.783 "adrfam": "IPv4", 00:22:43.783 "traddr": "10.0.0.2", 00:22:43.783 "trsvcid": "4420" 00:22:43.783 }, 00:22:43.783 "peer_address": { 00:22:43.783 "trtype": "TCP", 00:22:43.783 "adrfam": "IPv4", 00:22:43.783 "traddr": "10.0.0.1", 00:22:43.783 "trsvcid": "43000" 00:22:43.783 }, 00:22:43.783 "auth": { 00:22:43.783 "state": "completed", 00:22:43.783 "digest": "sha256", 00:22:43.783 "dhgroup": "ffdhe2048" 00:22:43.783 } 00:22:43.783 } 00:22:43.783 ]' 00:22:43.783 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.783 00:04:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:43.783 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.783 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:43.783 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.040 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.040 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.040 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.040 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:22:44.040 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:22:44.605 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.605 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:44.605 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.605 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.605 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.605 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.605 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:44.605 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:44.862 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:22:44.862 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.862 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:44.862 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:44.862 00:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:44.862 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.862 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.862 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.862 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.862 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.862 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.862 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.862 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.120 00:22:45.120 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.120 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.120 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.378 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.378 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.378 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.378 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.378 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.378 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.378 { 00:22:45.378 "cntlid": 13, 00:22:45.378 "qid": 0, 00:22:45.378 "state": "enabled", 00:22:45.378 "thread": "nvmf_tgt_poll_group_000", 00:22:45.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:45.378 "listen_address": { 00:22:45.378 "trtype": "TCP", 00:22:45.378 "adrfam": "IPv4", 00:22:45.378 "traddr": "10.0.0.2", 00:22:45.378 "trsvcid": "4420" 00:22:45.378 }, 00:22:45.378 "peer_address": { 00:22:45.378 "trtype": "TCP", 00:22:45.378 "adrfam": "IPv4", 00:22:45.378 "traddr": "10.0.0.1", 00:22:45.378 "trsvcid": "43022" 00:22:45.378 }, 00:22:45.378 "auth": { 00:22:45.378 "state": "completed", 00:22:45.378 "digest": 
"sha256", 00:22:45.378 "dhgroup": "ffdhe2048" 00:22:45.378 } 00:22:45.378 } 00:22:45.378 ]' 00:22:45.378 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.378 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:45.378 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:45.378 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:45.378 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.378 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.379 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.379 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.637 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:22:45.637 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:22:46.204 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.204 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:46.204 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.204 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.204 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.204 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.204 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:46.204 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:46.462 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:22:46.462 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.462 00:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:46.462 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:46.462 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:46.462 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.462 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:22:46.462 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.462 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.462 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.462 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:46.462 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.462 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.721 00:22:46.721 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.721 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.721 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.978 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.978 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.978 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.978 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.978 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.978 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.978 { 00:22:46.978 "cntlid": 15, 00:22:46.978 "qid": 0, 00:22:46.978 "state": "enabled", 00:22:46.978 "thread": "nvmf_tgt_poll_group_000", 00:22:46.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:46.978 "listen_address": { 00:22:46.978 "trtype": "TCP", 00:22:46.978 "adrfam": "IPv4", 00:22:46.978 "traddr": "10.0.0.2", 00:22:46.978 "trsvcid": "4420" 00:22:46.978 }, 00:22:46.978 "peer_address": { 00:22:46.978 "trtype": "TCP", 00:22:46.978 "adrfam": "IPv4", 00:22:46.978 "traddr": "10.0.0.1", 00:22:46.978 
"trsvcid": "43052" 00:22:46.978 }, 00:22:46.978 "auth": { 00:22:46.978 "state": "completed", 00:22:46.978 "digest": "sha256", 00:22:46.978 "dhgroup": "ffdhe2048" 00:22:46.978 } 00:22:46.978 } 00:22:46.978 ]' 00:22:46.978 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.978 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:46.978 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.978 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:46.978 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.978 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.978 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.978 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.235 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:22:47.235 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:22:47.800 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.800 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:47.800 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.800 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.800 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.800 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:47.800 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.800 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:47.800 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:48.058 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:22:48.058 00:04:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:48.058 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:48.058 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:48.058 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:48.058 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.058 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.058 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.058 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.058 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.058 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.058 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.058 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.316 00:22:48.316 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.316 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.316 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.573 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.573 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.573 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.573 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.573 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.573 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.573 { 00:22:48.573 "cntlid": 17, 00:22:48.573 "qid": 0, 00:22:48.573 "state": "enabled", 00:22:48.573 "thread": "nvmf_tgt_poll_group_000", 00:22:48.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:48.573 "listen_address": { 00:22:48.573 "trtype": "TCP", 00:22:48.573 "adrfam": "IPv4", 
00:22:48.573 "traddr": "10.0.0.2", 00:22:48.573 "trsvcid": "4420" 00:22:48.573 }, 00:22:48.573 "peer_address": { 00:22:48.573 "trtype": "TCP", 00:22:48.573 "adrfam": "IPv4", 00:22:48.573 "traddr": "10.0.0.1", 00:22:48.573 "trsvcid": "43076" 00:22:48.573 }, 00:22:48.573 "auth": { 00:22:48.573 "state": "completed", 00:22:48.573 "digest": "sha256", 00:22:48.573 "dhgroup": "ffdhe3072" 00:22:48.573 } 00:22:48.573 } 00:22:48.573 ]' 00:22:48.573 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.573 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:48.573 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.573 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:48.573 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.574 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.574 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.574 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.831 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:22:48.831 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:22:49.398 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.398 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:49.398 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.398 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.398 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.398 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.398 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:49.398 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:49.656 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:22:49.656 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.656 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:49.656 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:49.656 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:49.656 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.656 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.656 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.656 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.656 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.656 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.656 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.656 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.913 00:22:49.913 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.913 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.913 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.170 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.170 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.170 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.170 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.170 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.170 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:50.170 { 
00:22:50.170 "cntlid": 19, 00:22:50.170 "qid": 0, 00:22:50.170 "state": "enabled", 00:22:50.170 "thread": "nvmf_tgt_poll_group_000", 00:22:50.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:50.170 "listen_address": { 00:22:50.170 "trtype": "TCP", 00:22:50.170 "adrfam": "IPv4", 00:22:50.170 "traddr": "10.0.0.2", 00:22:50.170 "trsvcid": "4420" 00:22:50.170 }, 00:22:50.170 "peer_address": { 00:22:50.170 "trtype": "TCP", 00:22:50.170 "adrfam": "IPv4", 00:22:50.170 "traddr": "10.0.0.1", 00:22:50.170 "trsvcid": "43114" 00:22:50.170 }, 00:22:50.170 "auth": { 00:22:50.170 "state": "completed", 00:22:50.170 "digest": "sha256", 00:22:50.170 "dhgroup": "ffdhe3072" 00:22:50.170 } 00:22:50.170 } 00:22:50.170 ]' 00:22:50.170 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:50.170 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:50.170 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.170 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:50.170 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.170 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.170 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.170 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.427 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:22:50.428 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:22:50.993 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.993 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:50.993 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.993 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.993 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.993 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:50.993 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:50.993 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.259 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.259 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.519 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.519 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.519 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.519 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.519 00:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.519 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.519 { 00:22:51.519 "cntlid": 21, 00:22:51.519 "qid": 0, 00:22:51.519 "state": "enabled", 00:22:51.519 "thread": "nvmf_tgt_poll_group_000", 00:22:51.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:51.519 "listen_address": { 00:22:51.519 "trtype": "TCP", 00:22:51.519 "adrfam": "IPv4", 00:22:51.519 "traddr": "10.0.0.2", 00:22:51.519 "trsvcid": "4420" 00:22:51.519 }, 00:22:51.519 "peer_address": { 00:22:51.519 "trtype": "TCP", 00:22:51.519 "adrfam": "IPv4", 00:22:51.519 "traddr": "10.0.0.1", 00:22:51.519 "trsvcid": "43140" 00:22:51.519 }, 00:22:51.519 "auth": { 00:22:51.519 "state": "completed", 00:22:51.519 "digest": "sha256", 00:22:51.519 "dhgroup": "ffdhe3072" 00:22:51.519 } 00:22:51.519 } 00:22:51.519 ]' 00:22:51.519 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.519 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:51.519 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.777 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:51.777 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.777 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.777 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.777 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.777 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:22:51.777 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:22:52.341 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.600 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:52.600 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.600 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.600 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:22:52.600 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.600 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:52.600 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:52.600 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:22:52.600 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.600 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:52.600 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:52.600 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:52.600 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.600 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:22:52.600 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.600 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.600 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.600 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:52.600 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:52.600 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:52.857 00:22:52.857 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:52.857 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:52.857 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.115 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.115 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.115 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.115 00:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.115 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.115 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.115 { 00:22:53.115 "cntlid": 23, 00:22:53.115 "qid": 0, 00:22:53.115 "state": "enabled", 00:22:53.115 "thread": "nvmf_tgt_poll_group_000", 00:22:53.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:53.115 "listen_address": { 00:22:53.115 "trtype": "TCP", 00:22:53.115 "adrfam": "IPv4", 00:22:53.115 "traddr": "10.0.0.2", 00:22:53.115 "trsvcid": "4420" 00:22:53.115 }, 00:22:53.115 "peer_address": { 00:22:53.115 "trtype": "TCP", 00:22:53.115 "adrfam": "IPv4", 00:22:53.115 "traddr": "10.0.0.1", 00:22:53.115 "trsvcid": "53328" 00:22:53.115 }, 00:22:53.115 "auth": { 00:22:53.115 "state": "completed", 00:22:53.115 "digest": "sha256", 00:22:53.115 "dhgroup": "ffdhe3072" 00:22:53.115 } 00:22:53.115 } 00:22:53.115 ]' 00:22:53.115 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.115 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:53.115 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.373 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:53.373 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.373 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.373 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.373 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.631 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:22:53.631 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:22:54.197 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.197 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:54.197 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.197 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.197 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:54.197 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:54.197 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.197 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:54.197 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:54.197 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:22:54.198 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.198 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:54.198 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:54.198 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:54.198 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.198 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.198 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.198 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.198 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.198 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.198 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.198 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.455 00:22:54.455 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:54.455 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.455 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:54.712 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.712 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.712 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.712 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.712 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.712 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:54.712 { 00:22:54.712 "cntlid": 25, 00:22:54.712 "qid": 0, 00:22:54.712 "state": "enabled", 00:22:54.712 "thread": "nvmf_tgt_poll_group_000", 00:22:54.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:54.712 "listen_address": { 00:22:54.712 "trtype": "TCP", 00:22:54.712 "adrfam": "IPv4", 00:22:54.712 "traddr": "10.0.0.2", 00:22:54.712 "trsvcid": "4420" 00:22:54.712 }, 00:22:54.712 "peer_address": { 00:22:54.712 "trtype": "TCP", 00:22:54.712 "adrfam": "IPv4", 00:22:54.712 "traddr": "10.0.0.1", 00:22:54.712 "trsvcid": "53360" 00:22:54.712 }, 00:22:54.712 "auth": { 00:22:54.712 "state": "completed", 00:22:54.712 "digest": "sha256", 00:22:54.712 "dhgroup": "ffdhe4096" 00:22:54.712 } 00:22:54.712 } 00:22:54.712 ]' 00:22:54.712 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:54.713 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:54.713 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:54.713 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:54.713 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:54.971 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.971 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.971 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.971 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:22:54.971 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:22:55.541 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.541 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:55.541 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.541 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.541 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.541 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.541 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:55.541 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:55.798 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:22:55.798 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.798 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:55.798 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:55.798 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:55.798 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.798 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.798 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.798 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.798 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.798 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.798 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.798 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.056 00:22:56.056 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.056 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.056 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.314 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.314 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.314 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.314 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.314 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.314 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.314 { 00:22:56.314 "cntlid": 27, 00:22:56.314 "qid": 0, 00:22:56.314 "state": "enabled", 00:22:56.314 "thread": "nvmf_tgt_poll_group_000", 00:22:56.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:56.314 "listen_address": { 00:22:56.314 "trtype": "TCP", 00:22:56.314 "adrfam": "IPv4", 00:22:56.314 "traddr": "10.0.0.2", 00:22:56.314 "trsvcid": "4420" 00:22:56.314 }, 00:22:56.314 "peer_address": { 00:22:56.314 "trtype": "TCP", 00:22:56.314 "adrfam": "IPv4", 00:22:56.314 "traddr": "10.0.0.1", 00:22:56.314 "trsvcid": "53382" 00:22:56.314 }, 00:22:56.314 "auth": { 00:22:56.314 "state": "completed", 00:22:56.314 "digest": "sha256", 00:22:56.314 "dhgroup": "ffdhe4096" 00:22:56.314 } 00:22:56.314 } 00:22:56.314 ]' 00:22:56.314 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.314 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:56.315 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.315 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:56.315 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.572 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.572 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.572 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.572 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:22:56.572 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:22:57.138 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:22:57.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.138 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:57.138 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.138 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.138 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.138 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:57.138 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:57.138 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:57.396 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:22:57.396 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:57.396 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:57.396 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:57.396 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:57.396 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.396 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.396 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.396 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.396 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.396 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.396 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.396 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.654 00:22:57.654 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:22:57.654 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.654 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.911 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.911 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.911 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.911 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.911 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.911 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.911 { 00:22:57.911 "cntlid": 29, 00:22:57.911 "qid": 0, 00:22:57.911 "state": "enabled", 00:22:57.911 "thread": "nvmf_tgt_poll_group_000", 00:22:57.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:57.911 "listen_address": { 00:22:57.911 "trtype": "TCP", 00:22:57.911 "adrfam": "IPv4", 00:22:57.911 "traddr": "10.0.0.2", 00:22:57.911 "trsvcid": "4420" 00:22:57.911 }, 00:22:57.911 "peer_address": { 00:22:57.911 "trtype": "TCP", 00:22:57.911 "adrfam": "IPv4", 00:22:57.911 "traddr": "10.0.0.1", 00:22:57.911 "trsvcid": "53408" 00:22:57.911 }, 00:22:57.911 "auth": { 00:22:57.911 "state": "completed", 00:22:57.911 "digest": "sha256", 00:22:57.911 "dhgroup": "ffdhe4096" 00:22:57.911 } 00:22:57.911 } 00:22:57.912 ]' 00:22:57.912 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.912 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:57.912 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.912 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:57.912 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:58.170 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.170 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.170 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.170 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:22:58.170 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: 
--dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:22:58.736 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.736 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:58.736 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.736 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.736 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.736 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.736 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:58.736 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:58.995 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:22:58.995 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:58.995 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:58.995 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:58.995 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:58.995 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.995 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:22:58.995 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.995 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.995 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.995 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:58.995 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:58.995 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:59.253 00:22:59.253 00:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:59.253 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.253 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:59.511 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.511 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.511 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.511 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.511 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.511 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.511 { 00:22:59.511 "cntlid": 31, 00:22:59.511 "qid": 0, 00:22:59.511 "state": "enabled", 00:22:59.511 "thread": "nvmf_tgt_poll_group_000", 00:22:59.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:22:59.511 "listen_address": { 00:22:59.511 "trtype": "TCP", 00:22:59.511 "adrfam": "IPv4", 00:22:59.511 "traddr": "10.0.0.2", 00:22:59.511 "trsvcid": "4420" 00:22:59.511 }, 00:22:59.511 "peer_address": { 00:22:59.511 "trtype": "TCP", 00:22:59.511 "adrfam": "IPv4", 00:22:59.511 "traddr": "10.0.0.1", 00:22:59.511 "trsvcid": "53450" 00:22:59.511 }, 00:22:59.511 "auth": { 00:22:59.511 "state": "completed", 00:22:59.511 "digest": "sha256", 00:22:59.511 "dhgroup": "ffdhe4096" 00:22:59.511 } 00:22:59.511 } 00:22:59.511 ]' 00:22:59.511 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.511 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:59.511 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.511 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:59.511 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.511 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.511 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.511 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.769 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:22:59.769 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:00.337 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.337 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:00.337 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.337 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.337 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.337 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.337 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:00.337 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:00.337 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:00.595 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:23:00.595 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.595 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:00.595 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:00.595 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:00.595 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.595 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.595 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.596 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.596 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.596 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.596 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.596 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.852 00:23:00.852 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.852 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.852 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.109 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.109 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.109 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.109 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.109 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.109 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:01.109 { 00:23:01.109 "cntlid": 33, 00:23:01.109 "qid": 0, 00:23:01.109 "state": "enabled", 00:23:01.109 "thread": "nvmf_tgt_poll_group_000", 00:23:01.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:01.109 "listen_address": { 00:23:01.109 "trtype": "TCP", 00:23:01.109 "adrfam": "IPv4", 00:23:01.109 "traddr": "10.0.0.2", 00:23:01.109 "trsvcid": "4420" 00:23:01.109 }, 00:23:01.109 "peer_address": { 00:23:01.109 "trtype": "TCP", 00:23:01.109 "adrfam": "IPv4", 00:23:01.109 "traddr": "10.0.0.1", 00:23:01.109 "trsvcid": "53480" 00:23:01.109 }, 00:23:01.109 "auth": { 00:23:01.109 "state": "completed", 00:23:01.109 "digest": "sha256", 00:23:01.109 "dhgroup": "ffdhe6144" 00:23:01.109 } 00:23:01.109 } 00:23:01.109 ]' 00:23:01.109 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:01.109 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:01.109 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:01.109 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:01.109 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.367 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.367 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.367 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.367 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret 
DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:01.367 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:01.933 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.933 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:01.933 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.933 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.933 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.933 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:01.933 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:01.933 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:02.191 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:23:02.191 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:02.191 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:02.191 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:02.191 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:02.191 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.191 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.191 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.191 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.191 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.191 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.191 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.191 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.449 00:23:02.449 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.449 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:02.449 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.708 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.708 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.708 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.708 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.708 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.708 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.708 { 00:23:02.708 "cntlid": 35, 00:23:02.708 "qid": 0, 00:23:02.708 "state": "enabled", 00:23:02.708 "thread": "nvmf_tgt_poll_group_000", 00:23:02.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:02.708 "listen_address": { 00:23:02.708 "trtype": "TCP", 00:23:02.708 "adrfam": "IPv4", 00:23:02.708 "traddr": "10.0.0.2", 00:23:02.708 "trsvcid": "4420" 00:23:02.708 }, 00:23:02.708 "peer_address": { 00:23:02.708 "trtype": "TCP", 00:23:02.708 "adrfam": "IPv4", 00:23:02.708 "traddr": "10.0.0.1", 00:23:02.708 "trsvcid": "40592" 00:23:02.708 }, 00:23:02.708 "auth": { 00:23:02.708 "state": "completed", 00:23:02.708 "digest": "sha256", 00:23:02.708 "dhgroup": "ffdhe6144" 00:23:02.708 } 00:23:02.708 } 00:23:02.708 ]' 00:23:02.708 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.708 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:02.708 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.967 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:02.967 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.967 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.967 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.967 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.967 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:02.967 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:03.532 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.532 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:03.532 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.532 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.532 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.532 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:03.532 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:03.532 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:03.790 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:23:03.790 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.790 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:03.790 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:03.790 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:03.790 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.790 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.790 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.790 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.790 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.790 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.790 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.790 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.358 00:23:04.358 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:04.358 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.358 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.358 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.358 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.358 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.358 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.358 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.358 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:04.358 { 00:23:04.358 "cntlid": 37, 00:23:04.358 "qid": 0, 00:23:04.358 "state": "enabled", 00:23:04.358 "thread": "nvmf_tgt_poll_group_000", 00:23:04.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:04.358 "listen_address": { 00:23:04.358 "trtype": "TCP", 00:23:04.358 "adrfam": "IPv4", 00:23:04.358 "traddr": "10.0.0.2", 00:23:04.358 "trsvcid": "4420" 00:23:04.358 }, 00:23:04.358 "peer_address": { 00:23:04.358 "trtype": "TCP", 00:23:04.358 "adrfam": "IPv4", 00:23:04.358 "traddr": "10.0.0.1", 00:23:04.358 "trsvcid": "40608" 00:23:04.358 }, 00:23:04.358 "auth": { 00:23:04.358 "state": "completed", 00:23:04.358 "digest": "sha256", 00:23:04.358 "dhgroup": "ffdhe6144" 00:23:04.358 } 00:23:04.358 } 00:23:04.358 ]' 00:23:04.358 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:04.358 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:04.358 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:04.618 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:04.618 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:04.618 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.618 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:23:04.618 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.618 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:04.618 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:05.184 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.184 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:05.184 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.184 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.184 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.184 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:05.184 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:05.184 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:05.442 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:23:05.442 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:05.442 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:05.442 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:05.442 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:05.442 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.442 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:05.442 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.442 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.442 00:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.442 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:05.442 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:05.442 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:05.699 00:23:05.957 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:05.957 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:05.957 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.957 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.957 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.957 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.957 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.957 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.957 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:05.957 { 00:23:05.957 "cntlid": 39, 00:23:05.957 "qid": 0, 00:23:05.957 "state": "enabled", 00:23:05.957 "thread": "nvmf_tgt_poll_group_000", 00:23:05.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:05.957 "listen_address": { 00:23:05.957 "trtype": "TCP", 00:23:05.957 "adrfam": "IPv4", 00:23:05.957 "traddr": "10.0.0.2", 00:23:05.957 "trsvcid": "4420" 00:23:05.957 }, 00:23:05.957 "peer_address": { 00:23:05.957 "trtype": "TCP", 00:23:05.957 "adrfam": "IPv4", 00:23:05.957 "traddr": "10.0.0.1", 00:23:05.957 "trsvcid": "40646" 00:23:05.957 }, 00:23:05.957 "auth": { 00:23:05.957 "state": "completed", 00:23:05.957 "digest": "sha256", 00:23:05.957 "dhgroup": "ffdhe6144" 00:23:05.957 } 00:23:05.957 } 00:23:05.957 ]' 00:23:05.957 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:06.214 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:06.214 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:06.214 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:06.214 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:06.214 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:23:06.214 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.214 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.470 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:06.470 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.043 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.609 00:23:07.609 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:07.609 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:07.609 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.867 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.867 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.868 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.868 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.868 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.868 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:07.868 { 00:23:07.868 "cntlid": 41, 00:23:07.868 "qid": 0, 00:23:07.868 "state": "enabled", 00:23:07.868 "thread": "nvmf_tgt_poll_group_000", 00:23:07.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:07.868 "listen_address": { 00:23:07.868 "trtype": "TCP", 00:23:07.868 "adrfam": "IPv4", 00:23:07.868 "traddr": "10.0.0.2", 00:23:07.868 "trsvcid": "4420" 00:23:07.868 }, 00:23:07.868 "peer_address": { 00:23:07.868 "trtype": "TCP", 00:23:07.868 "adrfam": "IPv4", 00:23:07.868 "traddr": "10.0.0.1", 00:23:07.868 "trsvcid": "40658" 00:23:07.868 }, 00:23:07.868 "auth": { 00:23:07.868 "state": "completed", 00:23:07.868 "digest": "sha256", 00:23:07.868 "dhgroup": "ffdhe8192" 00:23:07.868 } 00:23:07.868 } 00:23:07.868 ]' 00:23:07.868 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:07.868 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:07.868 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:07.868 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:07.868 00:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:07.868 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.868 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.868 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.126 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:08.126 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:08.690 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.691 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:08.691 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.691 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.691 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.691 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:08.691 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:08.691 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:08.949 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:23:08.949 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:08.949 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:08.949 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:08.949 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:08.949 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.949 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.949 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.949 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.949 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.949 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.949 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.949 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.514 00:23:09.514 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:09.514 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.514 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:09.514 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.514 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.514 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.514 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.514 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.514 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:09.514 { 00:23:09.514 "cntlid": 43, 00:23:09.514 "qid": 0, 00:23:09.514 "state": "enabled", 00:23:09.514 "thread": "nvmf_tgt_poll_group_000", 00:23:09.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:09.514 "listen_address": { 00:23:09.514 "trtype": "TCP", 00:23:09.514 "adrfam": "IPv4", 00:23:09.514 "traddr": "10.0.0.2", 00:23:09.514 "trsvcid": "4420" 00:23:09.514 }, 00:23:09.514 "peer_address": { 00:23:09.514 "trtype": "TCP", 00:23:09.514 "adrfam": "IPv4", 00:23:09.514 "traddr": "10.0.0.1", 00:23:09.514 "trsvcid": "40670" 00:23:09.514 }, 00:23:09.514 "auth": { 00:23:09.514 "state": "completed", 00:23:09.514 "digest": "sha256", 00:23:09.514 "dhgroup": "ffdhe8192" 00:23:09.514 } 00:23:09.514 } 00:23:09.514 ]' 00:23:09.771 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:09.771 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:23:09.771 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:09.771 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:09.771 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:09.771 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.771 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.771 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.029 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:10.029 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:10.596 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.596 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:10.596 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.596 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.596 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.596 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:10.596 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:10.596 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:10.859 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:23:10.859 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:10.859 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:10.859 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:10.859 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:10.859 00:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.859 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:10.859 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.859 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.859 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.860 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:10.860 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:10.860 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.126 00:23:11.126 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:11.126 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.126 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:11.385 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.385 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.385 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.385 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.385 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.385 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:11.385 { 00:23:11.385 "cntlid": 45, 00:23:11.385 "qid": 0, 00:23:11.385 "state": "enabled", 00:23:11.385 "thread": "nvmf_tgt_poll_group_000", 00:23:11.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:11.385 "listen_address": { 00:23:11.385 "trtype": "TCP", 00:23:11.385 "adrfam": "IPv4", 00:23:11.385 "traddr": "10.0.0.2", 00:23:11.385 "trsvcid": "4420" 00:23:11.385 }, 00:23:11.385 "peer_address": { 00:23:11.385 "trtype": "TCP", 00:23:11.385 "adrfam": "IPv4", 00:23:11.385 "traddr": "10.0.0.1", 00:23:11.385 "trsvcid": "40702" 00:23:11.385 }, 00:23:11.385 "auth": { 00:23:11.385 "state": "completed", 00:23:11.385 "digest": "sha256", 00:23:11.385 "dhgroup": "ffdhe8192" 00:23:11.385 } 00:23:11.385 } 00:23:11.385 ]' 00:23:11.385 
00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:11.385 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:11.385 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:11.643 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:11.643 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:11.643 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.643 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.643 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.901 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:11.901 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:12.467 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.467 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:12.467 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.467 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.467 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.467 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:12.467 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:12.467 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:12.467 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:23:12.467 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:12.467 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:12.467 00:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:12.467 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:12.467 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.467 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:12.468 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.468 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.468 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.468 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:12.468 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:12.468 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:13.034 00:23:13.034 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:13.034 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:13.034 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.291 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.291 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.291 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.291 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.291 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.291 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:13.291 { 00:23:13.291 "cntlid": 47, 00:23:13.291 "qid": 0, 00:23:13.291 "state": "enabled", 00:23:13.291 "thread": "nvmf_tgt_poll_group_000", 00:23:13.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:13.291 "listen_address": { 00:23:13.291 "trtype": "TCP", 00:23:13.291 "adrfam": "IPv4", 00:23:13.291 "traddr": "10.0.0.2", 00:23:13.291 "trsvcid": "4420" 00:23:13.291 }, 00:23:13.291 "peer_address": { 00:23:13.291 "trtype": "TCP", 00:23:13.291 "adrfam": "IPv4", 00:23:13.291 "traddr": "10.0.0.1", 00:23:13.291 "trsvcid": "45470" 00:23:13.291 }, 00:23:13.291 "auth": { 00:23:13.291 "state": "completed", 00:23:13.291 
"digest": "sha256", 00:23:13.291 "dhgroup": "ffdhe8192" 00:23:13.291 } 00:23:13.291 } 00:23:13.291 ]' 00:23:13.291 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:13.291 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:13.291 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:13.291 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:13.291 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:13.292 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.292 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.292 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.550 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:13.550 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:14.116 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.116 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:14.116 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.116 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.116 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.116 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:14.116 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.116 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:14.116 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:14.116 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:14.374 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:23:14.374 00:04:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:14.374 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:14.374 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:14.374 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:14.374 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.374 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.374 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.374 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.374 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.374 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.374 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.374 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.632 00:23:14.632 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:14.632 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:14.632 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.890 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.890 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:14.890 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.890 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.890 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.890 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:14.890 { 00:23:14.890 "cntlid": 49, 00:23:14.890 "qid": 0, 00:23:14.890 "state": "enabled", 00:23:14.890 "thread": "nvmf_tgt_poll_group_000", 00:23:14.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:14.890 "listen_address": { 00:23:14.890 "trtype": "TCP", 00:23:14.890 "adrfam": "IPv4", 
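One detail worth noting in the connect_authenticate trace is the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line: the array expands to the extra option pair only when a controller key exists for the requested key index, which is why key3 iterations above pass --dhchap-key alone. A small illustrative reproduction of the idiom ($3 inside the function becomes $keyid here):

# Reproduction of the conditional controller-key expansion; array contents are illustrative.
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]="")   # key3 has no controller key in this run
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "extra controller-key args: ${ckey[*]:-<none>}"   # "<none>" for key3, the option pair otherwise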
00:23:14.890 "traddr": "10.0.0.2", 00:23:14.890 "trsvcid": "4420" 00:23:14.890 }, 00:23:14.890 "peer_address": { 00:23:14.890 "trtype": "TCP", 00:23:14.890 "adrfam": "IPv4", 00:23:14.890 "traddr": "10.0.0.1", 00:23:14.890 "trsvcid": "45504" 00:23:14.890 }, 00:23:14.890 "auth": { 00:23:14.890 "state": "completed", 00:23:14.890 "digest": "sha384", 00:23:14.890 "dhgroup": "null" 00:23:14.890 } 00:23:14.890 } 00:23:14.890 ]' 00:23:14.890 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:14.890 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:14.890 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:14.890 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:14.890 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:14.890 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.890 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.890 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.148 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:15.148 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:15.714 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:15.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:15.714 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:15.714 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.714 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.714 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.714 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:15.714 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:15.714 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:15.972 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:23:15.972 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:15.972 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:15.972 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:15.972 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:15.972 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.972 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.972 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.972 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.972 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.972 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.972 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.972 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.230 00:23:16.230 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:16.230 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:16.230 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.230 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.230 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.230 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.230 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.488 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.488 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:16.488 { 00:23:16.488 "cntlid": 51, 00:23:16.488 "qid": 0, 00:23:16.488 "state": "enabled", 
00:23:16.488 "thread": "nvmf_tgt_poll_group_000", 00:23:16.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:16.488 "listen_address": { 00:23:16.488 "trtype": "TCP", 00:23:16.488 "adrfam": "IPv4", 00:23:16.488 "traddr": "10.0.0.2", 00:23:16.488 "trsvcid": "4420" 00:23:16.488 }, 00:23:16.488 "peer_address": { 00:23:16.488 "trtype": "TCP", 00:23:16.488 "adrfam": "IPv4", 00:23:16.488 "traddr": "10.0.0.1", 00:23:16.488 "trsvcid": "45512" 00:23:16.488 }, 00:23:16.488 "auth": { 00:23:16.488 "state": "completed", 00:23:16.488 "digest": "sha384", 00:23:16.488 "dhgroup": "null" 00:23:16.488 } 00:23:16.488 } 00:23:16.488 ]' 00:23:16.488 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:16.488 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:16.488 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:16.488 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:16.488 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:16.488 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:16.488 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.488 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.747 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:16.747 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:17.313 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:17.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:17.313 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:17.313 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.313 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.313 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.313 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:17.313 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:23:17.314 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:17.573 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:23:17.573 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:17.573 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:17.573 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:17.573 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:17.573 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:17.573 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.573 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.573 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.573 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.573 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.573 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.573 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.844 00:23:17.844 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:17.844 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:17.844 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.844 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.844 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.844 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.844 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.844 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.105 00:05:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:18.105 { 00:23:18.105 "cntlid": 53, 00:23:18.105 "qid": 0, 00:23:18.105 "state": "enabled", 00:23:18.105 "thread": "nvmf_tgt_poll_group_000", 00:23:18.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:18.105 "listen_address": { 00:23:18.105 "trtype": "TCP", 00:23:18.105 "adrfam": "IPv4", 00:23:18.105 "traddr": "10.0.0.2", 00:23:18.105 "trsvcid": "4420" 00:23:18.105 }, 00:23:18.105 "peer_address": { 00:23:18.105 "trtype": "TCP", 00:23:18.105 "adrfam": "IPv4", 00:23:18.105 "traddr": "10.0.0.1", 00:23:18.105 "trsvcid": "45544" 00:23:18.105 }, 00:23:18.105 "auth": { 00:23:18.105 "state": "completed", 00:23:18.105 "digest": "sha384", 00:23:18.105 "dhgroup": "null" 00:23:18.105 } 00:23:18.105 } 00:23:18.105 ]' 00:23:18.105 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:18.105 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:18.105 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:18.105 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:18.105 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:18.105 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:18.105 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:18.105 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.364 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:18.364 00:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:18.929 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.929 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:18.929 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.929 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.929 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.929 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:23:18.929 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:18.929 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:19.187 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:23:19.187 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:19.187 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:19.187 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:19.187 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:19.187 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.187 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:19.187 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.187 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.187 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.187 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:19.187 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:19.187 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:19.187 00:23:19.446 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:19.446 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:19.446 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.446 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.446 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.446 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.446 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.446 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.446 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:19.446 { 00:23:19.446 "cntlid": 55, 00:23:19.446 "qid": 0, 00:23:19.446 "state": "enabled", 00:23:19.446 "thread": "nvmf_tgt_poll_group_000", 00:23:19.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:19.446 "listen_address": { 00:23:19.446 "trtype": "TCP", 00:23:19.446 "adrfam": "IPv4", 00:23:19.446 "traddr": "10.0.0.2", 00:23:19.446 "trsvcid": "4420" 00:23:19.446 }, 00:23:19.446 "peer_address": { 00:23:19.446 "trtype": "TCP", 00:23:19.446 "adrfam": "IPv4", 00:23:19.446 "traddr": "10.0.0.1", 00:23:19.446 "trsvcid": "45578" 00:23:19.446 }, 00:23:19.446 "auth": { 00:23:19.446 "state": "completed", 00:23:19.446 "digest": "sha384", 00:23:19.446 "dhgroup": "null" 00:23:19.446 } 00:23:19.446 } 00:23:19.446 ]' 00:23:19.446 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:19.704 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:19.704 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:19.704 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:19.704 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:19.704 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.704 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.704 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.961 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:19.961 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:20.528 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.528 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:20.528 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.528 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.528 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.528 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:20.528 00:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:20.528 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:20.528 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:20.796 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:23:20.796 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:20.796 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:20.796 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:20.796 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:20.796 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.796 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.796 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.796 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.796 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.796 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.796 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.796 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.796 00:23:21.054 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:21.054 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.054 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:21.054 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.054 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:21.054 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:21.054 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.054 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.054 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:21.054 { 00:23:21.054 "cntlid": 57, 00:23:21.054 "qid": 0, 00:23:21.054 "state": "enabled", 00:23:21.054 "thread": "nvmf_tgt_poll_group_000", 00:23:21.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:21.054 "listen_address": { 00:23:21.054 "trtype": "TCP", 00:23:21.054 "adrfam": "IPv4", 00:23:21.054 "traddr": "10.0.0.2", 00:23:21.054 "trsvcid": "4420" 00:23:21.054 }, 00:23:21.054 "peer_address": { 00:23:21.054 "trtype": "TCP", 00:23:21.054 "adrfam": "IPv4", 00:23:21.054 "traddr": "10.0.0.1", 00:23:21.054 "trsvcid": "45624" 00:23:21.054 }, 00:23:21.054 "auth": { 00:23:21.054 "state": "completed", 00:23:21.054 "digest": "sha384", 00:23:21.054 "dhgroup": "ffdhe2048" 00:23:21.054 } 00:23:21.054 } 00:23:21.054 ]' 00:23:21.054 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:21.054 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:21.054 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:21.311 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:21.311 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:21.311 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.311 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.311 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.311 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:21.312 00:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:21.877 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.877 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:21.877 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.877 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.877 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.877 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:21.877 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:21.877 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:22.136 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:23:22.136 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:22.136 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:22.136 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:22.136 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:22.136 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.136 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.136 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.136 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.136 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.136 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.136 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.136 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.393 00:23:22.394 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:22.394 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.394 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:22.651 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.651 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.651 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.651 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.651 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.651 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:22.651 { 00:23:22.651 "cntlid": 59, 00:23:22.651 "qid": 0, 00:23:22.651 "state": "enabled", 00:23:22.651 "thread": "nvmf_tgt_poll_group_000", 00:23:22.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:22.651 "listen_address": { 00:23:22.651 "trtype": "TCP", 00:23:22.651 "adrfam": "IPv4", 00:23:22.651 "traddr": "10.0.0.2", 00:23:22.651 "trsvcid": "4420" 00:23:22.651 }, 00:23:22.651 "peer_address": { 00:23:22.651 "trtype": "TCP", 00:23:22.651 "adrfam": "IPv4", 00:23:22.651 "traddr": "10.0.0.1", 00:23:22.652 "trsvcid": "57328" 00:23:22.652 }, 00:23:22.652 "auth": { 00:23:22.652 "state": "completed", 00:23:22.652 "digest": "sha384", 00:23:22.652 "dhgroup": "ffdhe2048" 00:23:22.652 } 00:23:22.652 } 00:23:22.652 ]' 00:23:22.652 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:22.652 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:22.652 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:22.652 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:22.652 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:22.910 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.910 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.910 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.910 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:22.910 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:23.476 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:23.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.476 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:23.476 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.476 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.476 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.476 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:23.476 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:23.476 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:23.734 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:23:23.734 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:23.734 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:23.734 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:23.734 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:23.734 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:23.734 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:23.734 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.734 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.734 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.734 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:23.734 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:23.734 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:23.993 00:23:23.993 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:23.993 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:23.993 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.251 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.251 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.251 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.251 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.251 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.251 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:24.251 { 00:23:24.251 "cntlid": 61, 00:23:24.251 "qid": 0, 00:23:24.251 "state": "enabled", 00:23:24.251 "thread": "nvmf_tgt_poll_group_000", 00:23:24.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:24.251 "listen_address": { 00:23:24.251 "trtype": "TCP", 00:23:24.251 "adrfam": "IPv4", 00:23:24.251 "traddr": "10.0.0.2", 00:23:24.251 "trsvcid": "4420" 00:23:24.251 }, 00:23:24.251 "peer_address": { 00:23:24.251 "trtype": "TCP", 00:23:24.251 "adrfam": "IPv4", 00:23:24.251 "traddr": "10.0.0.1", 00:23:24.251 "trsvcid": "57346" 00:23:24.251 }, 00:23:24.251 "auth": { 00:23:24.251 "state": "completed", 00:23:24.251 "digest": "sha384", 00:23:24.251 "dhgroup": "ffdhe2048" 00:23:24.251 } 00:23:24.251 } 00:23:24.251 ]' 00:23:24.251 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:24.251 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:24.251 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:24.251 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:24.251 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:24.251 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.251 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.251 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.509 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:24.509 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:25.075 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.075 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:25.075 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.075 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.075 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.075 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:25.075 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:25.075 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:25.333 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:23:25.333 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:25.333 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:25.333 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:25.333 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:25.333 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.333 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:25.333 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.333 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.333 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.333 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:25.333 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:25.333 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:25.591 00:23:25.591 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:25.591 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:23:25.591 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.850 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.850 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.850 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.850 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.850 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.850 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:25.850 { 00:23:25.850 "cntlid": 63, 00:23:25.850 "qid": 0, 00:23:25.850 "state": "enabled", 00:23:25.850 "thread": "nvmf_tgt_poll_group_000", 00:23:25.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:25.850 "listen_address": { 00:23:25.850 "trtype": "TCP", 00:23:25.850 "adrfam": "IPv4", 00:23:25.850 "traddr": "10.0.0.2", 00:23:25.850 "trsvcid": "4420" 00:23:25.850 }, 00:23:25.850 "peer_address": { 00:23:25.850 "trtype": "TCP", 00:23:25.850 "adrfam": "IPv4", 00:23:25.850 "traddr": "10.0.0.1", 00:23:25.850 "trsvcid": "57368" 00:23:25.850 }, 00:23:25.850 "auth": { 00:23:25.850 "state": "completed", 00:23:25.850 "digest": "sha384", 00:23:25.850 "dhgroup": "ffdhe2048" 00:23:25.850 } 00:23:25.850 } 00:23:25.850 ]' 00:23:25.850 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:25.850 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:25.850 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:25.850 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:25.850 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:25.850 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.850 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.850 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.108 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:26.108 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:26.673 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:23:26.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:26.673 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:26.673 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.673 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.673 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.673 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:26.673 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:26.673 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:26.673 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:26.938 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:23:26.938 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:26.938 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:26.938 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:26.938 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:26.939 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:26.939 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.939 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.939 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.939 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.939 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.939 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.939 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.197 
00:23:27.197 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:27.197 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.197 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:27.455 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.455 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:27.455 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.455 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.455 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.455 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:27.455 { 00:23:27.455 "cntlid": 65, 00:23:27.455 "qid": 0, 00:23:27.455 "state": "enabled", 00:23:27.455 "thread": "nvmf_tgt_poll_group_000", 00:23:27.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:27.455 "listen_address": { 00:23:27.455 "trtype": "TCP", 00:23:27.455 "adrfam": "IPv4", 00:23:27.455 "traddr": "10.0.0.2", 00:23:27.455 "trsvcid": "4420" 00:23:27.455 }, 00:23:27.455 "peer_address": { 00:23:27.455 "trtype": "TCP", 00:23:27.455 "adrfam": "IPv4", 00:23:27.455 "traddr": "10.0.0.1", 00:23:27.455 "trsvcid": "57382" 00:23:27.455 }, 00:23:27.455 "auth": { 00:23:27.455 "state": "completed", 00:23:27.455 "digest": "sha384", 00:23:27.455 "dhgroup": "ffdhe3072" 00:23:27.455 } 00:23:27.455 } 00:23:27.455 ]' 00:23:27.455 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:27.455 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:27.455 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:27.455 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:27.455 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:27.455 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:27.455 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:27.455 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.713 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:27.713 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:28.279 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:28.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:28.279 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:28.279 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.279 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.279 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.279 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:28.279 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:28.279 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:28.537 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:23:28.537 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:28.537 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:28.537 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:28.537 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:28.537 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:28.537 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.537 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.537 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.537 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.537 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.537 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.537 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.795 00:23:28.795 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:28.795 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:28.795 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.053 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.053 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.053 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.053 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.053 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.053 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:29.053 { 00:23:29.053 "cntlid": 67, 00:23:29.053 "qid": 0, 00:23:29.053 "state": "enabled", 00:23:29.053 "thread": "nvmf_tgt_poll_group_000", 00:23:29.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:29.053 "listen_address": { 00:23:29.053 "trtype": "TCP", 00:23:29.053 "adrfam": "IPv4", 00:23:29.053 "traddr": "10.0.0.2", 00:23:29.053 "trsvcid": "4420" 00:23:29.053 }, 00:23:29.053 "peer_address": { 00:23:29.053 "trtype": "TCP", 00:23:29.053 "adrfam": "IPv4", 00:23:29.053 "traddr": "10.0.0.1", 00:23:29.053 "trsvcid": "57414" 00:23:29.053 }, 00:23:29.053 "auth": { 00:23:29.053 "state": "completed", 00:23:29.053 "digest": "sha384", 00:23:29.053 "dhgroup": "ffdhe3072" 00:23:29.053 } 00:23:29.053 } 00:23:29.053 ]' 00:23:29.053 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:29.053 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:29.053 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:29.053 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:29.053 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:29.053 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:29.053 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:29.053 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:29.312 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret 
DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:29.312 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:29.877 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.877 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:29.877 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.877 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.877 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.877 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:29.877 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:29.877 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:30.136 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:23:30.136 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:30.136 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:30.136 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:30.136 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:30.136 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:30.136 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.136 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.136 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.136 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.136 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.136 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.136 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.395 00:23:30.395 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:30.395 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:30.395 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:30.653 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.653 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:30.653 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.653 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.653 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.653 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:30.653 { 00:23:30.653 "cntlid": 69, 00:23:30.653 "qid": 0, 00:23:30.653 "state": "enabled", 00:23:30.653 "thread": "nvmf_tgt_poll_group_000", 00:23:30.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:30.653 "listen_address": { 00:23:30.653 "trtype": "TCP", 00:23:30.653 "adrfam": "IPv4", 00:23:30.653 "traddr": "10.0.0.2", 00:23:30.653 "trsvcid": "4420" 00:23:30.653 }, 00:23:30.653 "peer_address": { 00:23:30.653 "trtype": "TCP", 00:23:30.653 "adrfam": "IPv4", 00:23:30.653 "traddr": "10.0.0.1", 00:23:30.653 "trsvcid": "57430" 00:23:30.653 }, 00:23:30.653 "auth": { 00:23:30.653 "state": "completed", 00:23:30.653 "digest": "sha384", 00:23:30.653 "dhgroup": "ffdhe3072" 00:23:30.653 } 00:23:30.653 } 00:23:30.653 ]' 00:23:30.653 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:30.653 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:30.653 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:30.653 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:30.653 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:30.653 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.653 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.653 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:23:30.910 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:30.910 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:31.475 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:31.475 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:31.475 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.475 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.475 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.475 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:31.475 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:31.475 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:31.733 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:23:31.733 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:31.733 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:31.733 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:31.733 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:31.733 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.733 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:31.733 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.733 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.733 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.733 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
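
The trace above is one pass of the script's connect_authenticate helper: the host-side bdev_nvme module is restricted to a single digest/DH-group pair, the host NQN is registered on the target subsystem with a named DH-CHAP key, and a controller is attached with that same key. Distilled into a minimal standalone sketch, assuming the same SPDK checkout path, RPC sockets, NQNs and pre-loaded key names as this run (the keys themselves are registered earlier in the script, and the target-side RPC is assumed to use the default SPDK socket):

#!/usr/bin/env bash
# Sketch of one connect_authenticate round (sha384 digest, ffdhe3072 DH group,
# host key "key3", which has no controller key in this run). Not captured output.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

# Limit the host-side initiator to one digest/DH-group combination.
$rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Allow the host on the target subsystem and bind it to DH-CHAP key "key3"
# (target-side RPC, default SPDK socket assumed here).
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key3

# Attach a controller from the host side, authenticating with the same key.
$rpc -s $host_sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key3
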
00:23:31.733 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:31.734 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:31.991 00:23:31.992 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:31.992 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:31.992 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.249 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.249 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.249 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.249 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.249 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.249 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:32.249 { 00:23:32.249 "cntlid": 71, 00:23:32.249 "qid": 0, 00:23:32.249 "state": "enabled", 00:23:32.249 "thread": "nvmf_tgt_poll_group_000", 00:23:32.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:32.249 "listen_address": { 00:23:32.249 "trtype": "TCP", 00:23:32.249 "adrfam": "IPv4", 00:23:32.249 "traddr": "10.0.0.2", 00:23:32.249 "trsvcid": "4420" 00:23:32.249 }, 00:23:32.249 "peer_address": { 00:23:32.249 "trtype": "TCP", 00:23:32.249 "adrfam": "IPv4", 00:23:32.249 "traddr": "10.0.0.1", 00:23:32.249 "trsvcid": "57446" 00:23:32.249 }, 00:23:32.249 "auth": { 00:23:32.249 "state": "completed", 00:23:32.249 "digest": "sha384", 00:23:32.249 "dhgroup": "ffdhe3072" 00:23:32.249 } 00:23:32.249 } 00:23:32.249 ]' 00:23:32.249 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:32.249 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:32.249 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:32.249 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:32.249 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:32.249 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.249 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.249 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.506 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:32.506 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:33.071 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:33.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.071 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:33.071 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.071 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.071 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.071 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:33.071 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:33.071 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:33.071 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:33.328 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:23:33.328 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:33.328 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:33.328 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:33.328 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:33.328 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:33.328 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.328 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.328 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.328 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
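
After each attach, the script verifies the authentication result before tearing the connection down for the next digest/DH-group combination. A condensed sketch of those checks for the ffdhe4096 round being set up above (same sockets and subsystem NQN as this run; the expected values are exactly what the trace asserts), not captured output:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

# The host-side controller must exist under the expected name.
name=$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# The target reports one qpair whose auth block records the negotiated digest,
# DH group, and a "completed" state when DH-CHAP succeeded.
qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear down before the next combination; the script then repeats the check
# through nvme-cli (nvme connect --dhchap-secret ... / nvme disconnect).
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
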
00:23:33.328 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.328 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.328 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.585 00:23:33.585 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:33.585 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:33.585 00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:33.585 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.585 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:33.585 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.585 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.843 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.843 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:33.843 { 00:23:33.843 "cntlid": 73, 00:23:33.843 "qid": 0, 00:23:33.843 "state": "enabled", 00:23:33.843 "thread": "nvmf_tgt_poll_group_000", 00:23:33.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:33.843 "listen_address": { 00:23:33.843 "trtype": "TCP", 00:23:33.843 "adrfam": "IPv4", 00:23:33.843 "traddr": "10.0.0.2", 00:23:33.843 "trsvcid": "4420" 00:23:33.843 }, 00:23:33.843 "peer_address": { 00:23:33.843 "trtype": "TCP", 00:23:33.843 "adrfam": "IPv4", 00:23:33.843 "traddr": "10.0.0.1", 00:23:33.844 "trsvcid": "34166" 00:23:33.844 }, 00:23:33.844 "auth": { 00:23:33.844 "state": "completed", 00:23:33.844 "digest": "sha384", 00:23:33.844 "dhgroup": "ffdhe4096" 00:23:33.844 } 00:23:33.844 } 00:23:33.844 ]' 00:23:33.844 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:33.844 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:33.844 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:33.844 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:33.844 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:33.844 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:33.844 
00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:33.844 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:34.101 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:34.101 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:34.666 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:34.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:34.666 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:34.666 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.666 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.666 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.666 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:34.666 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:34.666 00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:34.924 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:23:34.924 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:34.924 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:34.924 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:34.924 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:34.924 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:34.924 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.924 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.924 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.924 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.924 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.924 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.924 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.183 00:23:35.183 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:35.183 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:35.183 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:35.440 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.441 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:35.441 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.441 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.441 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.441 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:35.441 { 00:23:35.441 "cntlid": 75, 00:23:35.441 "qid": 0, 00:23:35.441 "state": "enabled", 00:23:35.441 "thread": "nvmf_tgt_poll_group_000", 00:23:35.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:35.441 "listen_address": { 00:23:35.441 "trtype": "TCP", 00:23:35.441 "adrfam": "IPv4", 00:23:35.441 "traddr": "10.0.0.2", 00:23:35.441 "trsvcid": "4420" 00:23:35.441 }, 00:23:35.441 "peer_address": { 00:23:35.441 "trtype": "TCP", 00:23:35.441 "adrfam": "IPv4", 00:23:35.441 "traddr": "10.0.0.1", 00:23:35.441 "trsvcid": "34200" 00:23:35.441 }, 00:23:35.441 "auth": { 00:23:35.441 "state": "completed", 00:23:35.441 "digest": "sha384", 00:23:35.441 "dhgroup": "ffdhe4096" 00:23:35.441 } 00:23:35.441 } 00:23:35.441 ]' 00:23:35.441 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:35.441 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:35.441 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:35.441 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:23:35.441 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:35.441 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:35.441 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:35.441 00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:35.701 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:35.701 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:36.268 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:36.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:36.268 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:36.268 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.268 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.268 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.268 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:36.268 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:36.268 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:36.526 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:23:36.526 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:36.526 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:36.526 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:36.526 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:36.526 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:36.526 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.526 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.526 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.526 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.526 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.526 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.526 00:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.784 00:23:36.784 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:36.784 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:36.784 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.784 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.784 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.784 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.784 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.784 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.784 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:36.784 { 00:23:36.784 "cntlid": 77, 00:23:36.784 "qid": 0, 00:23:36.784 "state": "enabled", 00:23:36.784 "thread": "nvmf_tgt_poll_group_000", 00:23:36.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:36.784 "listen_address": { 00:23:36.784 "trtype": "TCP", 00:23:36.784 "adrfam": "IPv4", 00:23:36.784 "traddr": "10.0.0.2", 00:23:36.784 "trsvcid": "4420" 00:23:36.784 }, 00:23:36.784 "peer_address": { 00:23:36.784 "trtype": "TCP", 00:23:36.784 "adrfam": "IPv4", 00:23:36.784 "traddr": "10.0.0.1", 00:23:36.784 "trsvcid": "34234" 00:23:36.784 }, 00:23:36.784 "auth": { 00:23:36.784 "state": "completed", 00:23:36.784 "digest": "sha384", 00:23:36.784 "dhgroup": "ffdhe4096" 00:23:36.784 } 00:23:36.784 } 00:23:36.784 ]' 00:23:36.784 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:37.043 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:37.043 00:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:37.043 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:37.043 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:37.043 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:37.043 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:37.043 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:37.301 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:37.301 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.876 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.877 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:37.877 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:37.877 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:38.134 00:23:38.392 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:38.392 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:38.392 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.392 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.392 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:38.392 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.392 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.392 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.392 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:38.392 { 00:23:38.392 "cntlid": 79, 00:23:38.392 "qid": 0, 00:23:38.392 "state": "enabled", 00:23:38.392 "thread": "nvmf_tgt_poll_group_000", 00:23:38.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:38.392 "listen_address": { 00:23:38.392 "trtype": "TCP", 00:23:38.392 "adrfam": "IPv4", 00:23:38.392 "traddr": "10.0.0.2", 00:23:38.392 "trsvcid": "4420" 00:23:38.392 }, 00:23:38.392 "peer_address": { 00:23:38.392 "trtype": "TCP", 00:23:38.392 "adrfam": "IPv4", 00:23:38.392 "traddr": "10.0.0.1", 00:23:38.392 "trsvcid": "34248" 00:23:38.392 }, 00:23:38.392 "auth": { 00:23:38.392 "state": "completed", 00:23:38.392 "digest": "sha384", 00:23:38.392 "dhgroup": "ffdhe4096" 00:23:38.392 } 00:23:38.392 } 00:23:38.392 ]' 00:23:38.392 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:38.650 00:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:38.650 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:38.650 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:38.650 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:38.650 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:38.650 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.650 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.908 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:38.908 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:39.473 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:39.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:39.474 00:05:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.474 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.039 00:23:40.039 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:40.039 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:40.039 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.039 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.039 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:40.039 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.040 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.040 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.040 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:40.040 { 00:23:40.040 "cntlid": 81, 00:23:40.040 "qid": 0, 00:23:40.040 "state": "enabled", 00:23:40.040 "thread": "nvmf_tgt_poll_group_000", 00:23:40.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:40.040 "listen_address": { 00:23:40.040 "trtype": "TCP", 00:23:40.040 "adrfam": "IPv4", 00:23:40.040 "traddr": "10.0.0.2", 00:23:40.040 "trsvcid": "4420" 00:23:40.040 }, 00:23:40.040 "peer_address": { 00:23:40.040 "trtype": "TCP", 00:23:40.040 "adrfam": "IPv4", 00:23:40.040 "traddr": "10.0.0.1", 00:23:40.040 "trsvcid": "34278" 00:23:40.040 }, 00:23:40.040 "auth": { 00:23:40.040 "state": "completed", 00:23:40.040 "digest": 
"sha384", 00:23:40.040 "dhgroup": "ffdhe6144" 00:23:40.040 } 00:23:40.040 } 00:23:40.040 ]' 00:23:40.040 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:40.297 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:40.297 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:40.297 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:40.297 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:40.297 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:40.297 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:40.297 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:40.555 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:40.555 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:41.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.118 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.684 00:23:41.684 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:41.684 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:41.684 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:41.684 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.684 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:41.684 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.684 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.684 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.684 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:41.684 { 00:23:41.684 "cntlid": 83, 00:23:41.684 "qid": 0, 00:23:41.684 "state": "enabled", 00:23:41.684 "thread": "nvmf_tgt_poll_group_000", 00:23:41.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:41.684 "listen_address": { 00:23:41.684 "trtype": "TCP", 00:23:41.684 "adrfam": "IPv4", 00:23:41.684 "traddr": "10.0.0.2", 00:23:41.684 
"trsvcid": "4420" 00:23:41.684 }, 00:23:41.684 "peer_address": { 00:23:41.684 "trtype": "TCP", 00:23:41.684 "adrfam": "IPv4", 00:23:41.684 "traddr": "10.0.0.1", 00:23:41.684 "trsvcid": "34306" 00:23:41.684 }, 00:23:41.684 "auth": { 00:23:41.684 "state": "completed", 00:23:41.684 "digest": "sha384", 00:23:41.684 "dhgroup": "ffdhe6144" 00:23:41.684 } 00:23:41.684 } 00:23:41.684 ]' 00:23:41.684 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:41.943 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:41.943 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:41.943 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:41.943 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:41.943 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:41.943 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:41.943 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:42.201 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:42.201 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:42.767 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:42.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:42.767 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:42.767 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.767 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.767 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.767 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:42.767 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:42.767 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:42.767 
00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:23:42.767 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:42.768 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:42.768 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:42.768 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:42.768 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:42.768 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.768 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.768 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.768 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.768 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.768 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.768 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:43.338 00:23:43.338 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:43.338 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:43.338 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:43.338 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.338 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:43.338 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.338 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.338 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.338 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:43.338 { 00:23:43.338 "cntlid": 85, 00:23:43.338 "qid": 0, 00:23:43.338 "state": "enabled", 00:23:43.338 "thread": "nvmf_tgt_poll_group_000", 00:23:43.338 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:43.338 "listen_address": { 00:23:43.338 "trtype": "TCP", 00:23:43.338 "adrfam": "IPv4", 00:23:43.338 "traddr": "10.0.0.2", 00:23:43.338 "trsvcid": "4420" 00:23:43.338 }, 00:23:43.338 "peer_address": { 00:23:43.338 "trtype": "TCP", 00:23:43.338 "adrfam": "IPv4", 00:23:43.338 "traddr": "10.0.0.1", 00:23:43.338 "trsvcid": "48246" 00:23:43.338 }, 00:23:43.338 "auth": { 00:23:43.338 "state": "completed", 00:23:43.338 "digest": "sha384", 00:23:43.338 "dhgroup": "ffdhe6144" 00:23:43.338 } 00:23:43.338 } 00:23:43.338 ]' 00:23:43.338 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:43.338 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:43.338 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:43.597 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:43.597 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:43.597 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:43.597 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:43.597 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.597 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:43.597 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:44.162 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:44.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:44.162 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:44.162 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.162 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.162 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.162 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:44.162 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:44.162 00:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:44.420 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:23:44.420 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:44.420 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:44.420 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:44.420 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:44.420 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:44.420 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:44.420 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.420 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.420 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.420 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:44.420 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:44.420 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:44.985 00:23:44.985 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:44.985 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:44.985 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:44.985 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.985 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:44.985 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.985 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.985 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.985 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:44.985 { 00:23:44.985 "cntlid": 87, 
00:23:44.985 "qid": 0, 00:23:44.985 "state": "enabled", 00:23:44.985 "thread": "nvmf_tgt_poll_group_000", 00:23:44.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:44.985 "listen_address": { 00:23:44.985 "trtype": "TCP", 00:23:44.985 "adrfam": "IPv4", 00:23:44.985 "traddr": "10.0.0.2", 00:23:44.985 "trsvcid": "4420" 00:23:44.985 }, 00:23:44.985 "peer_address": { 00:23:44.985 "trtype": "TCP", 00:23:44.985 "adrfam": "IPv4", 00:23:44.985 "traddr": "10.0.0.1", 00:23:44.985 "trsvcid": "48282" 00:23:44.985 }, 00:23:44.985 "auth": { 00:23:44.985 "state": "completed", 00:23:44.985 "digest": "sha384", 00:23:44.985 "dhgroup": "ffdhe6144" 00:23:44.985 } 00:23:44.985 } 00:23:44.985 ]' 00:23:44.985 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:44.985 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:44.985 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:44.985 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:44.985 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:45.243 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:45.243 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:45.243 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:45.243 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:45.243 00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:45.809 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.809 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:45.809 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.809 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.809 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.067 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.632 00:23:46.632 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:46.632 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:46.632 00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:46.891 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.891 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:46.891 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.891 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.891 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.891 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:46.891 { 00:23:46.891 "cntlid": 89, 00:23:46.891 "qid": 0, 00:23:46.891 "state": "enabled", 00:23:46.891 "thread": "nvmf_tgt_poll_group_000", 00:23:46.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:46.891 "listen_address": { 00:23:46.891 "trtype": "TCP", 00:23:46.891 "adrfam": "IPv4", 00:23:46.891 "traddr": "10.0.0.2", 00:23:46.891 "trsvcid": "4420" 00:23:46.891 }, 00:23:46.891 "peer_address": { 00:23:46.891 "trtype": "TCP", 00:23:46.891 "adrfam": "IPv4", 00:23:46.891 "traddr": "10.0.0.1", 00:23:46.891 "trsvcid": "48314" 00:23:46.891 }, 00:23:46.891 "auth": { 00:23:46.891 "state": "completed", 00:23:46.891 "digest": "sha384", 00:23:46.891 "dhgroup": "ffdhe8192" 00:23:46.891 } 00:23:46.891 } 00:23:46.891 ]' 00:23:46.891 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:46.892 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:46.892 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:46.892 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:46.892 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:46.892 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:46.892 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:46.892 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:47.150 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:47.150 00:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:47.715 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:47.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:47.715 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:47.715 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.715 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.715 00:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.715 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:47.715 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:47.715 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:47.973 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:23:47.973 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:47.973 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:47.973 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:47.973 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:47.973 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:47.973 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.973 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.973 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.973 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.973 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.973 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.973 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.545 00:23:48.545 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:48.545 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:48.545 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:48.545 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.545 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:23:48.545 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.545 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.545 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.545 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:48.545 { 00:23:48.545 "cntlid": 91, 00:23:48.545 "qid": 0, 00:23:48.545 "state": "enabled", 00:23:48.545 "thread": "nvmf_tgt_poll_group_000", 00:23:48.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:48.545 "listen_address": { 00:23:48.545 "trtype": "TCP", 00:23:48.545 "adrfam": "IPv4", 00:23:48.545 "traddr": "10.0.0.2", 00:23:48.545 "trsvcid": "4420" 00:23:48.545 }, 00:23:48.545 "peer_address": { 00:23:48.545 "trtype": "TCP", 00:23:48.545 "adrfam": "IPv4", 00:23:48.545 "traddr": "10.0.0.1", 00:23:48.545 "trsvcid": "48344" 00:23:48.545 }, 00:23:48.545 "auth": { 00:23:48.545 "state": "completed", 00:23:48.545 "digest": "sha384", 00:23:48.545 "dhgroup": "ffdhe8192" 00:23:48.545 } 00:23:48.545 } 00:23:48.546 ]' 00:23:48.546 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:48.546 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:48.546 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:48.804 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:48.804 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:48.804 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:48.804 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:48.804 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:48.804 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:48.804 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:49.372 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:49.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:49.372 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:49.372 00:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.372 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.372 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.372 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:49.372 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:49.372 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:49.630 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:23:49.630 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:49.630 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:49.630 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:49.630 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:49.630 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:49.630 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.630 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.630 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.630 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.631 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.631 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.631 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.196 00:23:50.196 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:50.196 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.196 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:50.454 00:05:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.454 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:50.454 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.454 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.454 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.454 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:50.454 { 00:23:50.454 "cntlid": 93, 00:23:50.454 "qid": 0, 00:23:50.454 "state": "enabled", 00:23:50.454 "thread": "nvmf_tgt_poll_group_000", 00:23:50.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:50.454 "listen_address": { 00:23:50.454 "trtype": "TCP", 00:23:50.454 "adrfam": "IPv4", 00:23:50.454 "traddr": "10.0.0.2", 00:23:50.454 "trsvcid": "4420" 00:23:50.454 }, 00:23:50.454 "peer_address": { 00:23:50.454 "trtype": "TCP", 00:23:50.454 "adrfam": "IPv4", 00:23:50.454 "traddr": "10.0.0.1", 00:23:50.454 "trsvcid": "48360" 00:23:50.454 }, 00:23:50.454 "auth": { 00:23:50.454 "state": "completed", 00:23:50.454 "digest": "sha384", 00:23:50.454 "dhgroup": "ffdhe8192" 00:23:50.454 } 00:23:50.454 } 00:23:50.454 ]' 00:23:50.454 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:50.454 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:50.454 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:50.454 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:50.454 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:50.454 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:50.454 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:50.454 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:50.715 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:50.715 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:51.278 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:51.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:51.278 00:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:51.278 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.278 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.278 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.278 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:51.278 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:51.278 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:51.536 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:23:51.536 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:51.536 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:51.536 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:51.536 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:51.536 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:51.536 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:51.536 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.536 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.536 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.536 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:51.536 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:51.536 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:52.100 00:23:52.100 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:52.100 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:52.100 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.100 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.100 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.100 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.100 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.100 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.100 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:52.100 { 00:23:52.100 "cntlid": 95, 00:23:52.101 "qid": 0, 00:23:52.101 "state": "enabled", 00:23:52.101 "thread": "nvmf_tgt_poll_group_000", 00:23:52.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:52.101 "listen_address": { 00:23:52.101 "trtype": "TCP", 00:23:52.101 "adrfam": "IPv4", 00:23:52.101 "traddr": "10.0.0.2", 00:23:52.101 "trsvcid": "4420" 00:23:52.101 }, 00:23:52.101 "peer_address": { 00:23:52.101 "trtype": "TCP", 00:23:52.101 "adrfam": "IPv4", 00:23:52.101 "traddr": "10.0.0.1", 00:23:52.101 "trsvcid": "48398" 00:23:52.101 }, 00:23:52.101 "auth": { 00:23:52.101 "state": "completed", 00:23:52.101 "digest": "sha384", 00:23:52.101 "dhgroup": "ffdhe8192" 00:23:52.101 } 00:23:52.101 } 00:23:52.101 ]' 00:23:52.101 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:52.358 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:52.358 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:52.358 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:52.358 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:52.358 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.358 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.358 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:52.629 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:52.629 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:53.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.194 00:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.194 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.195 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.453 00:23:53.453 
00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:53.453 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:53.453 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:53.711 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.711 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:53.711 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.711 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.711 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.711 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:53.711 { 00:23:53.711 "cntlid": 97, 00:23:53.711 "qid": 0, 00:23:53.711 "state": "enabled", 00:23:53.711 "thread": "nvmf_tgt_poll_group_000", 00:23:53.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:53.711 "listen_address": { 00:23:53.711 "trtype": "TCP", 00:23:53.711 "adrfam": "IPv4", 00:23:53.711 "traddr": "10.0.0.2", 00:23:53.711 "trsvcid": "4420" 00:23:53.711 }, 00:23:53.711 "peer_address": { 00:23:53.711 "trtype": "TCP", 00:23:53.711 "adrfam": "IPv4", 00:23:53.711 "traddr": "10.0.0.1", 00:23:53.711 "trsvcid": "46556" 00:23:53.711 }, 00:23:53.711 "auth": { 00:23:53.711 "state": "completed", 00:23:53.711 "digest": "sha512", 00:23:53.711 "dhgroup": "null" 00:23:53.711 } 00:23:53.711 } 00:23:53.711 ]' 00:23:53.711 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:53.711 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:53.711 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:53.969 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:53.969 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:53.969 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:53.969 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:53.969 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.226 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:54.226 00:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 
006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:54.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.792 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.050 00:23:55.050 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:55.050 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:55.050 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:55.308 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.308 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:55.308 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.308 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.308 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.308 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:55.308 { 00:23:55.308 "cntlid": 99, 00:23:55.308 "qid": 0, 00:23:55.308 "state": "enabled", 00:23:55.309 "thread": "nvmf_tgt_poll_group_000", 00:23:55.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:55.309 "listen_address": { 00:23:55.309 "trtype": "TCP", 00:23:55.309 "adrfam": "IPv4", 00:23:55.309 "traddr": "10.0.0.2", 00:23:55.309 "trsvcid": "4420" 00:23:55.309 }, 00:23:55.309 "peer_address": { 00:23:55.309 "trtype": "TCP", 00:23:55.309 "adrfam": "IPv4", 00:23:55.309 "traddr": "10.0.0.1", 00:23:55.309 "trsvcid": "46570" 00:23:55.309 }, 00:23:55.309 "auth": { 00:23:55.309 "state": "completed", 00:23:55.309 "digest": "sha512", 00:23:55.309 "dhgroup": "null" 00:23:55.309 } 00:23:55.309 } 00:23:55.309 ]' 00:23:55.309 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:55.309 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:55.309 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:55.567 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:55.567 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:55.567 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:55.567 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:55.567 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:55.567 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:55.567 00:05:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:23:56.133 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:56.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:56.133 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:56.133 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.133 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
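
The iterations above all follow the same connect_authenticate flow from target/auth.sh: restrict the host-side bdev_nvme module to a single digest/dhgroup pair, allow the host NQN on the subsystem with the DH-CHAP key under test, then attach a controller so the DH-HMAC-CHAP handshake actually runs. A condensed sketch of that sequence, using the sha512/null/key2 iteration as the example (paths, NQNs, address and key names are the ones this run uses; the shell variables are just shorthand):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

# Host side: only negotiate the digest/dhgroup pair under test.
$rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

# Target side (the log's rpc_cmd; add -s <target socket> if the target
# does not use the default RPC socket): allow the host with this key pair.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attaching the controller is what performs the DH-HMAC-CHAP handshake.
$rpc -s $host_sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
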
00:23:56.394 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:56.653 00:23:56.653 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:56.653 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:56.653 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:56.911 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.911 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:56.911 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.911 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.911 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.911 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:56.911 { 00:23:56.911 "cntlid": 101, 00:23:56.911 "qid": 0, 00:23:56.911 "state": "enabled", 00:23:56.911 "thread": "nvmf_tgt_poll_group_000", 00:23:56.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:56.911 "listen_address": { 00:23:56.911 "trtype": "TCP", 00:23:56.911 "adrfam": "IPv4", 00:23:56.911 "traddr": "10.0.0.2", 00:23:56.911 "trsvcid": "4420" 00:23:56.911 }, 00:23:56.911 "peer_address": { 00:23:56.911 "trtype": "TCP", 00:23:56.911 "adrfam": "IPv4", 00:23:56.911 "traddr": "10.0.0.1", 00:23:56.911 "trsvcid": "46596" 00:23:56.911 }, 00:23:56.911 "auth": { 00:23:56.911 "state": "completed", 00:23:56.911 "digest": "sha512", 00:23:56.911 "dhgroup": "null" 00:23:56.911 } 00:23:56.911 } 00:23:56.911 ]' 00:23:56.911 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:56.911 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:56.911 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:56.911 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:56.911 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:56.912 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:56.912 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:56.912 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:57.169 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:57.169 00:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:23:57.733 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:57.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:57.733 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:57.733 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.733 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.733 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.733 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:57.733 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:57.733 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:57.991 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:23:57.991 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:57.991 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:57.991 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:57.991 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:57.991 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:57.991 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:57.991 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.991 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.991 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.991 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:57.991 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:57.991 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:58.249 00:23:58.249 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:58.249 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:58.250 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:58.507 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.507 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:58.507 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.507 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.507 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.507 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:58.507 { 00:23:58.507 "cntlid": 103, 00:23:58.507 "qid": 0, 00:23:58.507 "state": "enabled", 00:23:58.507 "thread": "nvmf_tgt_poll_group_000", 00:23:58.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:23:58.507 "listen_address": { 00:23:58.507 "trtype": "TCP", 00:23:58.507 "adrfam": "IPv4", 00:23:58.507 "traddr": "10.0.0.2", 00:23:58.507 "trsvcid": "4420" 00:23:58.507 }, 00:23:58.507 "peer_address": { 00:23:58.507 "trtype": "TCP", 00:23:58.507 "adrfam": "IPv4", 00:23:58.507 "traddr": "10.0.0.1", 00:23:58.507 "trsvcid": "46624" 00:23:58.507 }, 00:23:58.507 "auth": { 00:23:58.507 "state": "completed", 00:23:58.507 "digest": "sha512", 00:23:58.507 "dhgroup": "null" 00:23:58.507 } 00:23:58.507 } 00:23:58.507 ]' 00:23:58.507 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:58.507 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:58.507 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:58.507 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:58.507 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:58.507 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:58.507 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:58.507 00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:58.764 00:05:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:58.764 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:23:59.329 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:59.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:59.329 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:59.329 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.329 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.329 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.329 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:59.329 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:59.329 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:59.329 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:59.587 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:23:59.587 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:59.587 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:59.587 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:59.587 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:59.587 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:59.587 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.587 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.587 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.587 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.587 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
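
After each attach, the test verifies on both ends that authentication actually happened: the host must report the attached controller, and the target's qpair listing must show a completed DH-CHAP negotiation with the expected digest and dhgroup; the qpairs JSON blocks in this log are the output of that check. A minimal sketch using the same jq filters, for the ffdhe2048 iteration in progress here (rpc is the rpc.py path used throughout this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host side: the attached controller must be visible by name.
[[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Target side: the qpair's auth object must reflect the negotiated parameters.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(echo "$qpairs" | jq -r '.[0].auth.digest') == sha512 ]]
[[ $(echo "$qpairs" | jq -r '.[0].auth.dhgroup') == ffdhe2048 ]]
[[ $(echo "$qpairs" | jq -r '.[0].auth.state') == completed ]]
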
00:23:59.587 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.587 00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.844 00:23:59.844 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:59.844 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:59.844 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:00.102 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.102 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:00.102 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.102 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.102 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.102 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:00.102 { 00:24:00.102 "cntlid": 105, 00:24:00.102 "qid": 0, 00:24:00.102 "state": "enabled", 00:24:00.102 "thread": "nvmf_tgt_poll_group_000", 00:24:00.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:00.102 "listen_address": { 00:24:00.102 "trtype": "TCP", 00:24:00.102 "adrfam": "IPv4", 00:24:00.102 "traddr": "10.0.0.2", 00:24:00.102 "trsvcid": "4420" 00:24:00.102 }, 00:24:00.102 "peer_address": { 00:24:00.102 "trtype": "TCP", 00:24:00.102 "adrfam": "IPv4", 00:24:00.102 "traddr": "10.0.0.1", 00:24:00.102 "trsvcid": "46662" 00:24:00.102 }, 00:24:00.102 "auth": { 00:24:00.102 "state": "completed", 00:24:00.102 "digest": "sha512", 00:24:00.102 "dhgroup": "ffdhe2048" 00:24:00.102 } 00:24:00.102 } 00:24:00.102 ]' 00:24:00.102 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:00.102 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:00.102 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:00.102 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:00.102 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:00.102 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:00.102 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:00.102 00:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:00.360 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:24:00.360 00:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:24:00.925 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:00.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:00.925 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:00.925 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.925 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.925 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.925 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:00.925 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:00.925 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:01.181 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:24:01.181 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:01.181 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:01.181 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:01.181 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:01.181 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:01.181 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:01.181 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.181 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:01.181 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.181 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:01.181 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:01.181 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:01.439 00:24:01.439 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:01.439 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:01.439 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:01.439 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.439 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:01.697 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.697 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.697 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.697 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:01.697 { 00:24:01.697 "cntlid": 107, 00:24:01.697 "qid": 0, 00:24:01.697 "state": "enabled", 00:24:01.697 "thread": "nvmf_tgt_poll_group_000", 00:24:01.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:01.697 "listen_address": { 00:24:01.697 "trtype": "TCP", 00:24:01.697 "adrfam": "IPv4", 00:24:01.697 "traddr": "10.0.0.2", 00:24:01.697 "trsvcid": "4420" 00:24:01.697 }, 00:24:01.697 "peer_address": { 00:24:01.697 "trtype": "TCP", 00:24:01.697 "adrfam": "IPv4", 00:24:01.697 "traddr": "10.0.0.1", 00:24:01.697 "trsvcid": "46684" 00:24:01.697 }, 00:24:01.697 "auth": { 00:24:01.697 "state": "completed", 00:24:01.697 "digest": "sha512", 00:24:01.697 "dhgroup": "ffdhe2048" 00:24:01.697 } 00:24:01.697 } 00:24:01.697 ]' 00:24:01.697 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:01.697 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:01.697 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:01.697 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:01.697 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:24:01.697 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:01.697 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:01.697 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:01.956 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:24:01.956 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:24:02.521 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:02.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:02.521 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:02.521 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.521 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.521 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.521 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:02.521 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:02.521 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:02.780 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:24:02.780 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:02.780 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:02.780 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:02.780 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:02.780 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:02.780 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 
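
Each iteration also exercises the Linux kernel host: nvme-cli connects to the same subsystem using the cleartext DHHC-1 secrets that correspond to the configured key pair, the connection is torn down, and the host entry is removed from the subsystem before the next digest/dhgroup/key combination. A sketch of that leg, using the key1/ckey1 secrets shown above (these are the run's own test secrets):

hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

# Kernel host connect, authenticating with the cleartext secrets for key1/ckey1.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q $hostnqn --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 \
  --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: \
  --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==:

# Tear down before the next digest/dhgroup/key combination.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
  nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 $hostnqn
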
00:24:02.780 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.780 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.780 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.780 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.780 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.780 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:03.039 00:24:03.039 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:03.039 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:03.039 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:03.039 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.039 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:03.039 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.039 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.039 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.039 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:03.039 { 00:24:03.039 "cntlid": 109, 00:24:03.039 "qid": 0, 00:24:03.039 "state": "enabled", 00:24:03.039 "thread": "nvmf_tgt_poll_group_000", 00:24:03.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:03.039 "listen_address": { 00:24:03.039 "trtype": "TCP", 00:24:03.039 "adrfam": "IPv4", 00:24:03.039 "traddr": "10.0.0.2", 00:24:03.039 "trsvcid": "4420" 00:24:03.039 }, 00:24:03.039 "peer_address": { 00:24:03.039 "trtype": "TCP", 00:24:03.039 "adrfam": "IPv4", 00:24:03.039 "traddr": "10.0.0.1", 00:24:03.039 "trsvcid": "45970" 00:24:03.039 }, 00:24:03.039 "auth": { 00:24:03.039 "state": "completed", 00:24:03.039 "digest": "sha512", 00:24:03.039 "dhgroup": "ffdhe2048" 00:24:03.039 } 00:24:03.039 } 00:24:03.039 ]' 00:24:03.039 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:03.297 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:03.297 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:03.297 00:05:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:03.297 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:03.297 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:03.297 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:03.297 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:03.555 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:24:03.555 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:24:04.121 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:04.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:04.121 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:04.121 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.121 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.121 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.121 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:04.121 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:04.121 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:04.380 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:24:04.380 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:04.380 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:04.380 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:04.380 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:04.380 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:04.380 00:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:04.380 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.380 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.380 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.380 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:04.380 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:04.380 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:04.380 00:24:04.637 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:04.637 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:04.637 00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:04.637 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.637 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:04.637 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.637 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.637 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.637 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:04.637 { 00:24:04.637 "cntlid": 111, 00:24:04.637 "qid": 0, 00:24:04.637 "state": "enabled", 00:24:04.637 "thread": "nvmf_tgt_poll_group_000", 00:24:04.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:04.637 "listen_address": { 00:24:04.637 "trtype": "TCP", 00:24:04.637 "adrfam": "IPv4", 00:24:04.637 "traddr": "10.0.0.2", 00:24:04.637 "trsvcid": "4420" 00:24:04.637 }, 00:24:04.637 "peer_address": { 00:24:04.637 "trtype": "TCP", 00:24:04.637 "adrfam": "IPv4", 00:24:04.637 "traddr": "10.0.0.1", 00:24:04.637 "trsvcid": "46000" 00:24:04.637 }, 00:24:04.637 "auth": { 00:24:04.637 "state": "completed", 00:24:04.637 "digest": "sha512", 00:24:04.637 "dhgroup": "ffdhe2048" 00:24:04.637 } 00:24:04.637 } 00:24:04.637 ]' 00:24:04.637 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:04.894 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:04.894 
00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:04.894 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:04.894 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:04.894 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:04.894 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:04.894 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:05.152 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:24:05.153 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:24:05.718 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:05.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:05.718 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:05.718 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.718 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.718 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.718 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:05.718 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:05.718 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:05.718 00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:05.718 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:24:05.718 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:05.718 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:05.718 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:05.718 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:05.718 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:05.718 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:05.718 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.718 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.718 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.718 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:05.718 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:05.718 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:05.977 00:24:05.977 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:05.977 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:05.977 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:06.235 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.235 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:06.235 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.235 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.235 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.235 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:06.235 { 00:24:06.235 "cntlid": 113, 00:24:06.235 "qid": 0, 00:24:06.235 "state": "enabled", 00:24:06.235 "thread": "nvmf_tgt_poll_group_000", 00:24:06.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:06.235 "listen_address": { 00:24:06.235 "trtype": "TCP", 00:24:06.235 "adrfam": "IPv4", 00:24:06.235 "traddr": "10.0.0.2", 00:24:06.235 "trsvcid": "4420" 00:24:06.235 }, 00:24:06.235 "peer_address": { 00:24:06.235 "trtype": "TCP", 00:24:06.235 "adrfam": "IPv4", 00:24:06.235 "traddr": "10.0.0.1", 00:24:06.235 "trsvcid": "46020" 00:24:06.235 }, 00:24:06.235 "auth": { 00:24:06.235 "state": "completed", 00:24:06.235 "digest": "sha512", 00:24:06.235 "dhgroup": "ffdhe3072" 00:24:06.235 } 00:24:06.235 } 00:24:06.235 ]' 00:24:06.235 00:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:06.235 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:06.235 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:06.493 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:06.493 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:06.493 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:06.493 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:06.493 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:06.753 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:24:06.753 00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:07.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:07.320 00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:07.578 00:24:07.578 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:07.578 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:07.578 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:07.836 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.836 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:07.836 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.836 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.836 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.836 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:07.836 { 00:24:07.836 "cntlid": 115, 00:24:07.836 "qid": 0, 00:24:07.836 "state": "enabled", 00:24:07.836 "thread": "nvmf_tgt_poll_group_000", 00:24:07.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:07.836 "listen_address": { 00:24:07.836 "trtype": "TCP", 00:24:07.836 "adrfam": "IPv4", 00:24:07.836 "traddr": "10.0.0.2", 00:24:07.836 "trsvcid": "4420" 00:24:07.836 }, 00:24:07.836 "peer_address": { 00:24:07.836 "trtype": "TCP", 00:24:07.836 "adrfam": "IPv4", 
00:24:07.836 "traddr": "10.0.0.1", 00:24:07.836 "trsvcid": "46042" 00:24:07.836 }, 00:24:07.836 "auth": { 00:24:07.836 "state": "completed", 00:24:07.836 "digest": "sha512", 00:24:07.836 "dhgroup": "ffdhe3072" 00:24:07.836 } 00:24:07.836 } 00:24:07.836 ]' 00:24:07.836 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:07.836 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:07.836 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:07.836 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:08.093 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:08.093 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:08.093 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:08.093 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:08.093 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:24:08.093 00:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:24:08.659 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:08.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:08.659 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:08.659 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.659 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:08.918 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:09.176 00:24:09.176 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:09.176 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:09.176 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:09.434 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.434 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:09.434 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.434 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.434 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.434 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:09.434 { 00:24:09.434 "cntlid": 117, 00:24:09.434 "qid": 0, 00:24:09.434 "state": "enabled", 00:24:09.434 "thread": "nvmf_tgt_poll_group_000", 00:24:09.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:09.434 "listen_address": { 00:24:09.434 "trtype": "TCP", 
00:24:09.434 "adrfam": "IPv4", 00:24:09.434 "traddr": "10.0.0.2", 00:24:09.434 "trsvcid": "4420" 00:24:09.434 }, 00:24:09.434 "peer_address": { 00:24:09.434 "trtype": "TCP", 00:24:09.434 "adrfam": "IPv4", 00:24:09.434 "traddr": "10.0.0.1", 00:24:09.434 "trsvcid": "46074" 00:24:09.434 }, 00:24:09.434 "auth": { 00:24:09.434 "state": "completed", 00:24:09.434 "digest": "sha512", 00:24:09.434 "dhgroup": "ffdhe3072" 00:24:09.434 } 00:24:09.434 } 00:24:09.434 ]' 00:24:09.434 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:09.434 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:09.434 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:09.434 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:09.434 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:09.692 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:09.692 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:09.692 00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:09.692 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:24:09.692 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:24:10.256 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:10.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:10.256 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:10.256 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.256 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.256 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.256 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:10.256 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:10.256 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:10.514 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:24:10.514 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:10.514 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:10.514 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:10.514 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:10.514 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:10.514 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:10.514 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.514 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.514 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.514 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:10.514 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:10.514 00:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:10.775 00:24:10.775 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:10.775 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:10.775 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:11.034 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.034 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:11.034 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.034 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.034 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.034 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:11.034 { 00:24:11.034 "cntlid": 119, 00:24:11.034 "qid": 0, 00:24:11.034 "state": "enabled", 00:24:11.034 "thread": "nvmf_tgt_poll_group_000", 00:24:11.034 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:11.034 "listen_address": { 00:24:11.034 "trtype": "TCP", 00:24:11.034 "adrfam": "IPv4", 00:24:11.034 "traddr": "10.0.0.2", 00:24:11.034 "trsvcid": "4420" 00:24:11.034 }, 00:24:11.034 "peer_address": { 00:24:11.034 "trtype": "TCP", 00:24:11.034 "adrfam": "IPv4", 00:24:11.034 "traddr": "10.0.0.1", 00:24:11.034 "trsvcid": "46098" 00:24:11.034 }, 00:24:11.034 "auth": { 00:24:11.034 "state": "completed", 00:24:11.034 "digest": "sha512", 00:24:11.034 "dhgroup": "ffdhe3072" 00:24:11.034 } 00:24:11.034 } 00:24:11.034 ]' 00:24:11.034 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:11.034 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:11.034 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:11.034 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:11.034 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:11.292 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:11.292 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:11.292 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:11.292 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:24:11.292 00:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:24:11.858 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:11.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:11.858 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:11.858 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.858 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.858 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.858 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.858 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:11.858 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:11.858 00:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:12.116 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:24:12.116 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:12.116 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:12.116 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:12.116 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:12.116 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:12.116 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.116 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.116 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.116 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.116 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.116 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.116 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.373 00:24:12.373 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:12.373 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:12.373 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:12.686 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.686 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:12.686 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.686 00:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.686 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.686 00:05:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:12.686 { 00:24:12.686 "cntlid": 121, 00:24:12.686 "qid": 0, 00:24:12.686 "state": "enabled", 00:24:12.686 "thread": "nvmf_tgt_poll_group_000", 00:24:12.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:12.686 "listen_address": { 00:24:12.686 "trtype": "TCP", 00:24:12.686 "adrfam": "IPv4", 00:24:12.686 "traddr": "10.0.0.2", 00:24:12.686 "trsvcid": "4420" 00:24:12.686 }, 00:24:12.686 "peer_address": { 00:24:12.686 "trtype": "TCP", 00:24:12.686 "adrfam": "IPv4", 00:24:12.686 "traddr": "10.0.0.1", 00:24:12.686 "trsvcid": "39958" 00:24:12.686 }, 00:24:12.686 "auth": { 00:24:12.686 "state": "completed", 00:24:12.686 "digest": "sha512", 00:24:12.687 "dhgroup": "ffdhe4096" 00:24:12.687 } 00:24:12.687 } 00:24:12.687 ]' 00:24:12.687 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:12.687 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:12.687 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:12.687 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:12.687 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:12.687 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:12.687 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:12.687 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:12.944 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:24:12.944 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:24:13.510 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:13.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:13.511 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:13.511 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.511 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.511 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
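On the initiator side the same combinations are also driven through nvme-cli; the nvme connect/disconnect pairs in this trace follow the shape sketched below. The two DHHC-1 strings stand in for the host and controller secrets that are printed in full in the log (shortened to shell variables here purely for readability; this is an illustrative sketch, not additional captured output).

  HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e"
  HOST_KEY='DHHC-1:...'   # host secret for the key under test, exactly as printed in the trace
  CTRL_KEY='DHHC-1:...'   # controller secret; the key3 passes omit --dhchap-ctrl-secret entirely
  # authenticate and connect (flags copied from the trace above)
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 \
      --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"
  # drop the session again before the host entry is removed from the subsystem
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0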
00:24:13.511 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:13.511 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:13.511 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:13.769 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:24:13.769 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:13.769 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:13.769 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:13.769 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:13.769 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:13.769 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.769 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.769 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.769 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.769 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.769 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.769 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:14.028 00:24:14.028 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:14.028 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:14.028 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:14.285 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.285 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:14.285 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.285 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.285 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.285 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:14.285 { 00:24:14.285 "cntlid": 123, 00:24:14.285 "qid": 0, 00:24:14.285 "state": "enabled", 00:24:14.285 "thread": "nvmf_tgt_poll_group_000", 00:24:14.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:14.285 "listen_address": { 00:24:14.285 "trtype": "TCP", 00:24:14.285 "adrfam": "IPv4", 00:24:14.285 "traddr": "10.0.0.2", 00:24:14.285 "trsvcid": "4420" 00:24:14.285 }, 00:24:14.285 "peer_address": { 00:24:14.285 "trtype": "TCP", 00:24:14.285 "adrfam": "IPv4", 00:24:14.285 "traddr": "10.0.0.1", 00:24:14.285 "trsvcid": "39984" 00:24:14.285 }, 00:24:14.285 "auth": { 00:24:14.285 "state": "completed", 00:24:14.285 "digest": "sha512", 00:24:14.285 "dhgroup": "ffdhe4096" 00:24:14.285 } 00:24:14.285 } 00:24:14.285 ]' 00:24:14.285 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:14.285 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:14.285 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:14.285 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:14.285 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:14.285 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:14.285 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:14.285 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:14.543 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:24:14.543 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:24:15.113 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:15.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:15.113 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:15.113 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.113 00:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.113 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.113 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:15.113 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:15.113 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:15.371 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:24:15.371 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:15.371 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:15.371 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:15.371 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:15.371 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:15.371 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.371 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.371 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.371 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.371 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.371 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.371 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.630 00:24:15.630 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:15.630 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:15.630 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:15.888 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.888 00:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:15.888 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.888 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.888 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.888 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:15.888 { 00:24:15.888 "cntlid": 125, 00:24:15.888 "qid": 0, 00:24:15.888 "state": "enabled", 00:24:15.888 "thread": "nvmf_tgt_poll_group_000", 00:24:15.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:15.888 "listen_address": { 00:24:15.888 "trtype": "TCP", 00:24:15.888 "adrfam": "IPv4", 00:24:15.888 "traddr": "10.0.0.2", 00:24:15.888 "trsvcid": "4420" 00:24:15.888 }, 00:24:15.888 "peer_address": { 00:24:15.888 "trtype": "TCP", 00:24:15.888 "adrfam": "IPv4", 00:24:15.888 "traddr": "10.0.0.1", 00:24:15.888 "trsvcid": "40026" 00:24:15.888 }, 00:24:15.888 "auth": { 00:24:15.888 "state": "completed", 00:24:15.888 "digest": "sha512", 00:24:15.888 "dhgroup": "ffdhe4096" 00:24:15.888 } 00:24:15.888 } 00:24:15.888 ]' 00:24:15.888 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:15.888 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:15.888 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:15.888 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:15.888 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:15.888 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:15.888 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:15.888 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:16.145 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:24:16.145 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:24:16.710 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:16.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:16.710 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:16.710 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.710 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.710 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.710 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:16.710 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:16.710 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:16.972 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:24:16.972 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:16.972 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:16.972 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:16.972 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:16.972 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:16.972 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:16.972 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.972 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.972 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.972 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:16.972 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:16.972 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:17.236 00:24:17.236 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:17.236 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:17.236 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:17.494 00:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.494 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:17.494 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.494 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.494 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.494 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:17.494 { 00:24:17.494 "cntlid": 127, 00:24:17.494 "qid": 0, 00:24:17.494 "state": "enabled", 00:24:17.494 "thread": "nvmf_tgt_poll_group_000", 00:24:17.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:17.494 "listen_address": { 00:24:17.494 "trtype": "TCP", 00:24:17.494 "adrfam": "IPv4", 00:24:17.494 "traddr": "10.0.0.2", 00:24:17.494 "trsvcid": "4420" 00:24:17.494 }, 00:24:17.494 "peer_address": { 00:24:17.494 "trtype": "TCP", 00:24:17.494 "adrfam": "IPv4", 00:24:17.494 "traddr": "10.0.0.1", 00:24:17.494 "trsvcid": "40058" 00:24:17.494 }, 00:24:17.494 "auth": { 00:24:17.494 "state": "completed", 00:24:17.494 "digest": "sha512", 00:24:17.494 "dhgroup": "ffdhe4096" 00:24:17.494 } 00:24:17.494 } 00:24:17.494 ]' 00:24:17.494 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:17.494 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:17.494 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:17.494 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:17.494 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:17.494 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:17.494 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:17.494 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:17.752 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:24:17.752 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:24:18.319 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.319 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:18.319 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.319 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.319 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.319 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.319 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:18.319 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:18.319 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:18.576 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:24:18.576 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:18.576 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:18.576 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:18.576 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:18.576 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:18.576 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.576 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.576 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.576 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.576 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.576 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.576 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.833 00:24:18.833 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:18.833 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:18.833 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:19.091 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.091 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:19.091 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.091 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.091 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.091 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:19.091 { 00:24:19.091 "cntlid": 129, 00:24:19.091 "qid": 0, 00:24:19.091 "state": "enabled", 00:24:19.091 "thread": "nvmf_tgt_poll_group_000", 00:24:19.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:19.091 "listen_address": { 00:24:19.091 "trtype": "TCP", 00:24:19.091 "adrfam": "IPv4", 00:24:19.091 "traddr": "10.0.0.2", 00:24:19.091 "trsvcid": "4420" 00:24:19.091 }, 00:24:19.091 "peer_address": { 00:24:19.091 "trtype": "TCP", 00:24:19.091 "adrfam": "IPv4", 00:24:19.091 "traddr": "10.0.0.1", 00:24:19.091 "trsvcid": "40102" 00:24:19.091 }, 00:24:19.091 "auth": { 00:24:19.091 "state": "completed", 00:24:19.091 "digest": "sha512", 00:24:19.091 "dhgroup": "ffdhe6144" 00:24:19.091 } 00:24:19.091 } 00:24:19.091 ]' 00:24:19.091 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:19.091 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:19.348 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:19.348 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:19.348 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:19.348 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:19.348 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:19.348 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:19.605 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:24:19.605 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret 
DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:24:20.170 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:20.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:20.171 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:20.171 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.171 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.171 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.171 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:20.171 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:20.171 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:20.429 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:24:20.429 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:20.429 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:20.429 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:20.429 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:20.429 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:20.429 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.429 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.429 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.429 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.429 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.429 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.429 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.687 00:24:20.687 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:20.687 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:20.687 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:20.944 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.944 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:20.944 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.944 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.944 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.944 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:20.944 { 00:24:20.944 "cntlid": 131, 00:24:20.944 "qid": 0, 00:24:20.944 "state": "enabled", 00:24:20.944 "thread": "nvmf_tgt_poll_group_000", 00:24:20.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:20.944 "listen_address": { 00:24:20.944 "trtype": "TCP", 00:24:20.944 "adrfam": "IPv4", 00:24:20.944 "traddr": "10.0.0.2", 00:24:20.944 "trsvcid": "4420" 00:24:20.944 }, 00:24:20.944 "peer_address": { 00:24:20.944 "trtype": "TCP", 00:24:20.944 "adrfam": "IPv4", 00:24:20.944 "traddr": "10.0.0.1", 00:24:20.944 "trsvcid": "40114" 00:24:20.944 }, 00:24:20.944 "auth": { 00:24:20.944 "state": "completed", 00:24:20.944 "digest": "sha512", 00:24:20.944 "dhgroup": "ffdhe6144" 00:24:20.944 } 00:24:20.944 } 00:24:20.944 ]' 00:24:20.944 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:20.944 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:20.944 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:20.944 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:20.944 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:20.944 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:20.944 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:20.944 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:21.215 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:24:21.215 00:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:24:21.781 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:21.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:21.781 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:21.781 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.781 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.781 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.781 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:21.781 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:21.781 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:22.039 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:24:22.039 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:22.039 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:22.039 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:22.039 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:22.039 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:22.039 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.039 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.039 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.039 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.039 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.039 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.039 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.297 00:24:22.297 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:22.297 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:22.297 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:22.555 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.555 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:22.555 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.555 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.555 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.555 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:22.555 { 00:24:22.555 "cntlid": 133, 00:24:22.555 "qid": 0, 00:24:22.555 "state": "enabled", 00:24:22.555 "thread": "nvmf_tgt_poll_group_000", 00:24:22.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:22.555 "listen_address": { 00:24:22.555 "trtype": "TCP", 00:24:22.555 "adrfam": "IPv4", 00:24:22.555 "traddr": "10.0.0.2", 00:24:22.555 "trsvcid": "4420" 00:24:22.555 }, 00:24:22.555 "peer_address": { 00:24:22.555 "trtype": "TCP", 00:24:22.555 "adrfam": "IPv4", 00:24:22.555 "traddr": "10.0.0.1", 00:24:22.555 "trsvcid": "52026" 00:24:22.555 }, 00:24:22.555 "auth": { 00:24:22.555 "state": "completed", 00:24:22.555 "digest": "sha512", 00:24:22.555 "dhgroup": "ffdhe6144" 00:24:22.555 } 00:24:22.555 } 00:24:22.555 ]' 00:24:22.555 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:22.555 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:22.555 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:22.555 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:22.555 00:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:22.823 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:22.823 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:22.823 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:22.823 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret 
DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:24:22.823 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:24:23.475 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:23.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:23.475 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:23.475 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.475 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.475 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.475 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:23.475 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:23.475 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:23.749 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:24:23.749 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:23.749 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:23.749 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:23.749 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:23.749 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:23.749 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:23.749 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.749 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.749 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.749 00:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:23.749 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:24:23.749 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:24.024 00:24:24.024 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:24.024 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:24.025 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:24.337 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.338 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:24.338 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.338 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.338 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.338 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:24.338 { 00:24:24.338 "cntlid": 135, 00:24:24.338 "qid": 0, 00:24:24.338 "state": "enabled", 00:24:24.338 "thread": "nvmf_tgt_poll_group_000", 00:24:24.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:24.338 "listen_address": { 00:24:24.338 "trtype": "TCP", 00:24:24.338 "adrfam": "IPv4", 00:24:24.338 "traddr": "10.0.0.2", 00:24:24.338 "trsvcid": "4420" 00:24:24.338 }, 00:24:24.338 "peer_address": { 00:24:24.338 "trtype": "TCP", 00:24:24.338 "adrfam": "IPv4", 00:24:24.338 "traddr": "10.0.0.1", 00:24:24.338 "trsvcid": "52042" 00:24:24.338 }, 00:24:24.338 "auth": { 00:24:24.338 "state": "completed", 00:24:24.338 "digest": "sha512", 00:24:24.338 "dhgroup": "ffdhe6144" 00:24:24.338 } 00:24:24.338 } 00:24:24.338 ]' 00:24:24.338 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:24.338 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:24.338 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:24.338 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:24.338 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:24.338 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:24.338 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:24.338 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:24.612 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:24:24.612 00:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:25.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.229 00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.889 00:24:25.889 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:25.889 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:25.889 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:26.174 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.174 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:26.174 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.174 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.174 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.174 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:26.174 { 00:24:26.174 "cntlid": 137, 00:24:26.174 "qid": 0, 00:24:26.174 "state": "enabled", 00:24:26.174 "thread": "nvmf_tgt_poll_group_000", 00:24:26.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:26.174 "listen_address": { 00:24:26.174 "trtype": "TCP", 00:24:26.174 "adrfam": "IPv4", 00:24:26.174 "traddr": "10.0.0.2", 00:24:26.174 "trsvcid": "4420" 00:24:26.174 }, 00:24:26.174 "peer_address": { 00:24:26.174 "trtype": "TCP", 00:24:26.174 "adrfam": "IPv4", 00:24:26.174 "traddr": "10.0.0.1", 00:24:26.174 "trsvcid": "52072" 00:24:26.174 }, 00:24:26.174 "auth": { 00:24:26.174 "state": "completed", 00:24:26.174 "digest": "sha512", 00:24:26.174 "dhgroup": "ffdhe8192" 00:24:26.174 } 00:24:26.174 } 00:24:26.174 ]' 00:24:26.174 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:26.174 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:26.174 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:26.174 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:26.174 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:26.174 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:26.174 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:26.174 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:26.431 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:24:26.431 00:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:24:26.995 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:26.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:26.995 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:26.995 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.995 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.995 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.995 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:26.995 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:26.995 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:26.995 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:24:26.996 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:26.996 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:26.996 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:26.996 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:26.996 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:26.996 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:26.996 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.996 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.996 00:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.996 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:26.996 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:26.996 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.561 00:24:27.561 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:27.561 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:27.561 00:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:27.826 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.826 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:27.826 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.826 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.826 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.826 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:27.826 { 00:24:27.826 "cntlid": 139, 00:24:27.826 "qid": 0, 00:24:27.826 "state": "enabled", 00:24:27.826 "thread": "nvmf_tgt_poll_group_000", 00:24:27.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:27.826 "listen_address": { 00:24:27.826 "trtype": "TCP", 00:24:27.826 "adrfam": "IPv4", 00:24:27.826 "traddr": "10.0.0.2", 00:24:27.826 "trsvcid": "4420" 00:24:27.826 }, 00:24:27.826 "peer_address": { 00:24:27.826 "trtype": "TCP", 00:24:27.826 "adrfam": "IPv4", 00:24:27.826 "traddr": "10.0.0.1", 00:24:27.826 "trsvcid": "52098" 00:24:27.826 }, 00:24:27.826 "auth": { 00:24:27.826 "state": "completed", 00:24:27.826 "digest": "sha512", 00:24:27.826 "dhgroup": "ffdhe8192" 00:24:27.826 } 00:24:27.826 } 00:24:27.826 ]' 00:24:27.826 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:27.827 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:27.827 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:27.827 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:27.827 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:27.827 00:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:27.827 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:27.827 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:28.085 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:24:28.085 00:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: --dhchap-ctrl-secret DHHC-1:02:ZjFjOWViZjdjM2RhYWYzMGY4MjI2NDc4ZDI0NmM2MDQzZmFlZmFkYjUyM2E4MDMz8cUgTA==: 00:24:28.655 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:28.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:28.655 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:28.655 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.655 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.655 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.655 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:28.655 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:28.655 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:28.913 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:24:28.913 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:28.913 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:28.913 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:28.913 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:28.913 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:28.913 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.913 00:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.913 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.913 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.913 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.913 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.913 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.479 00:24:29.479 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:29.479 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:29.479 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:29.479 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.479 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:29.479 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.479 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.479 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.479 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:29.479 { 00:24:29.479 "cntlid": 141, 00:24:29.479 "qid": 0, 00:24:29.479 "state": "enabled", 00:24:29.479 "thread": "nvmf_tgt_poll_group_000", 00:24:29.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:29.479 "listen_address": { 00:24:29.479 "trtype": "TCP", 00:24:29.479 "adrfam": "IPv4", 00:24:29.479 "traddr": "10.0.0.2", 00:24:29.479 "trsvcid": "4420" 00:24:29.479 }, 00:24:29.479 "peer_address": { 00:24:29.479 "trtype": "TCP", 00:24:29.479 "adrfam": "IPv4", 00:24:29.479 "traddr": "10.0.0.1", 00:24:29.479 "trsvcid": "52134" 00:24:29.479 }, 00:24:29.479 "auth": { 00:24:29.479 "state": "completed", 00:24:29.479 "digest": "sha512", 00:24:29.479 "dhgroup": "ffdhe8192" 00:24:29.479 } 00:24:29.479 } 00:24:29.479 ]' 00:24:29.479 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:29.737 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:29.737 00:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:29.737 00:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:29.737 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:29.737 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:29.737 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:29.737 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:29.995 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:24:29.995 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:01:YTNjODZkOTM4YjRlNDRiY2FhMGMxMGI0MmJmNmVlNTJVJVf1: 00:24:30.560 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:30.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:30.560 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:30.560 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.560 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.560 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.560 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:30.560 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:30.560 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:30.560 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:24:30.560 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:30.560 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:30.560 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:30.560 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:30.560 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:30.560 00:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:30.560 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.560 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.560 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.560 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:30.560 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:30.560 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:31.125 00:24:31.125 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:31.125 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:31.125 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:31.383 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.384 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:31.384 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.384 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.384 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.384 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:31.384 { 00:24:31.384 "cntlid": 143, 00:24:31.384 "qid": 0, 00:24:31.384 "state": "enabled", 00:24:31.384 "thread": "nvmf_tgt_poll_group_000", 00:24:31.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:31.384 "listen_address": { 00:24:31.384 "trtype": "TCP", 00:24:31.384 "adrfam": "IPv4", 00:24:31.384 "traddr": "10.0.0.2", 00:24:31.384 "trsvcid": "4420" 00:24:31.384 }, 00:24:31.384 "peer_address": { 00:24:31.384 "trtype": "TCP", 00:24:31.384 "adrfam": "IPv4", 00:24:31.384 "traddr": "10.0.0.1", 00:24:31.384 "trsvcid": "52162" 00:24:31.384 }, 00:24:31.384 "auth": { 00:24:31.384 "state": "completed", 00:24:31.384 "digest": "sha512", 00:24:31.384 "dhgroup": "ffdhe8192" 00:24:31.384 } 00:24:31.384 } 00:24:31.384 ]' 00:24:31.384 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:31.384 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:31.384 
00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:31.384 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:31.384 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:31.384 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:31.384 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:31.384 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:31.641 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:24:31.641 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:24:32.208 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:32.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:32.208 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:32.208 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.208 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.208 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.208 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:24:32.208 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:24:32.208 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:24:32.208 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:32.208 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:32.208 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:32.467 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:24:32.467 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:32.467 00:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:32.467 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:32.467 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:32.467 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:32.467 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.467 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.467 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.467 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.467 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.467 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.467 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.032 00:24:33.032 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:33.032 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:33.032 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:33.032 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.032 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:33.032 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.032 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.032 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.032 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:33.032 { 00:24:33.032 "cntlid": 145, 00:24:33.032 "qid": 0, 00:24:33.032 "state": "enabled", 00:24:33.032 "thread": "nvmf_tgt_poll_group_000", 00:24:33.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:33.032 "listen_address": { 00:24:33.032 "trtype": "TCP", 00:24:33.032 "adrfam": "IPv4", 00:24:33.032 "traddr": "10.0.0.2", 00:24:33.032 "trsvcid": "4420" 00:24:33.032 }, 00:24:33.032 "peer_address": { 00:24:33.032 
"trtype": "TCP", 00:24:33.032 "adrfam": "IPv4", 00:24:33.032 "traddr": "10.0.0.1", 00:24:33.032 "trsvcid": "52650" 00:24:33.032 }, 00:24:33.032 "auth": { 00:24:33.032 "state": "completed", 00:24:33.032 "digest": "sha512", 00:24:33.032 "dhgroup": "ffdhe8192" 00:24:33.032 } 00:24:33.032 } 00:24:33.032 ]' 00:24:33.032 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:33.290 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:33.290 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:33.290 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:33.290 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:33.290 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:33.290 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:33.290 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:33.553 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:24:33.553 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NTRjYjFhMjFiZTc3MTA3NWYxNWM0N2ZjZDBkMDE1YWEyNGFiNDQ3Nzk1YmUyMzVliwEljg==: --dhchap-ctrl-secret DHHC-1:03:NzllMDA3YWRmNGZkZjVkY2UwOGQwNzM0NTMxNThiOWEwZGY1ZDNiZDExODcxZjJhNzFkODdiY2Q1NTY3OWZjNBx91RU=: 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:34.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:24:34.121 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:24:34.378 request: 00:24:34.378 { 00:24:34.378 "name": "nvme0", 00:24:34.378 "trtype": "tcp", 00:24:34.378 "traddr": "10.0.0.2", 00:24:34.378 "adrfam": "ipv4", 00:24:34.378 "trsvcid": "4420", 00:24:34.378 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:34.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:34.378 "prchk_reftag": false, 00:24:34.378 "prchk_guard": false, 00:24:34.378 "hdgst": false, 00:24:34.378 "ddgst": false, 00:24:34.378 "dhchap_key": "key2", 00:24:34.378 "allow_unrecognized_csi": false, 00:24:34.378 "method": "bdev_nvme_attach_controller", 00:24:34.378 "req_id": 1 00:24:34.378 } 00:24:34.378 Got JSON-RPC error response 00:24:34.378 response: 00:24:34.378 { 00:24:34.378 "code": -5, 00:24:34.378 "message": "Input/output error" 00:24:34.378 } 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.378 00:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:34.378 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:34.635 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.635 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:34.636 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.636 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:34.636 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:34.636 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:34.893 request: 00:24:34.893 { 00:24:34.893 "name": "nvme0", 00:24:34.893 "trtype": "tcp", 00:24:34.893 "traddr": "10.0.0.2", 00:24:34.893 "adrfam": "ipv4", 00:24:34.893 "trsvcid": "4420", 00:24:34.893 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:34.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:34.893 "prchk_reftag": false, 00:24:34.893 "prchk_guard": false, 00:24:34.893 "hdgst": false, 00:24:34.893 "ddgst": false, 00:24:34.893 "dhchap_key": "key1", 00:24:34.893 "dhchap_ctrlr_key": "ckey2", 00:24:34.893 "allow_unrecognized_csi": false, 00:24:34.893 "method": "bdev_nvme_attach_controller", 00:24:34.893 "req_id": 1 00:24:34.893 } 00:24:34.893 Got JSON-RPC error response 00:24:34.893 response: 00:24:34.893 { 00:24:34.893 "code": -5, 00:24:34.893 "message": "Input/output error" 00:24:34.893 } 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:34.893 00:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:34.893 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.894 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:34.894 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.894 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.894 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.894 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.459 request: 00:24:35.459 { 00:24:35.459 "name": "nvme0", 00:24:35.459 "trtype": "tcp", 00:24:35.459 "traddr": "10.0.0.2", 00:24:35.459 "adrfam": "ipv4", 00:24:35.459 "trsvcid": "4420", 00:24:35.459 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:35.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:35.459 "prchk_reftag": false, 00:24:35.459 "prchk_guard": false, 00:24:35.459 "hdgst": false, 00:24:35.459 "ddgst": false, 00:24:35.459 "dhchap_key": "key1", 00:24:35.459 "dhchap_ctrlr_key": "ckey1", 00:24:35.459 "allow_unrecognized_csi": false, 00:24:35.459 "method": "bdev_nvme_attach_controller", 00:24:35.459 "req_id": 1 00:24:35.459 } 00:24:35.459 Got JSON-RPC error response 00:24:35.459 response: 00:24:35.459 { 00:24:35.459 "code": -5, 00:24:35.459 "message": "Input/output error" 00:24:35.459 } 00:24:35.459 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 402454 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 402454 ']' 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 402454 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 402454 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 402454' 00:24:35.460 killing process with pid 402454 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 402454 00:24:35.460 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 402454 00:24:35.718 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:24:35.718 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:35.718 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:35.718 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:24:35.718 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=425514 00:24:35.718 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:24:35.718 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 425514 00:24:35.718 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 425514 ']' 00:24:35.718 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.718 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.718 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.718 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.718 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.974 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.974 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:24:35.974 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:35.974 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.974 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.974 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.974 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:35.974 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 425514 00:24:35.974 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 425514 ']' 00:24:35.974 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.974 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.974 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
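The entries above bring the NVMe-oF target back up for the second half of the test: nvmf_tgt is relaunched with --wait-for-rpc (so startup pauses before subsystem initialization) and with the nvmf_auth log flag enabled, and the script then waits for the new process (pid 425514) to answer on /var/tmp/spdk.sock. The sketch below only illustrates that bring-up order under stated assumptions: it is not the autotest's own helper, it drops the ip netns wrapper the test uses, and the polling loop merely stands in for the harness's waitforlisten; rpc_get_methods and framework_start_init are standard rpc.py calls.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the target paused in its pre-init state, with auth-level logging enabled.
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Poll the RPC socket until the target accepts calls (stand-in for waitforlisten).
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

# Any configuration that must precede subsystem init goes here;
# framework_start_init then resumes normal startup.
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init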
00:24:35.974 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.974 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.232 null0 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hrd 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.JJ0 ]] 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JJ0 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.9Xt 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.HjC ]] 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HjC 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:36.232 00:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.VF4 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Nb6 ]] 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nb6 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.KKa 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.232 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.490 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:36.490 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
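Immediately above, the script registers the generated key files in the target's keyring (key0 through key3 plus the ckey0 through ckey2 controller keys), authorizes the host NQN for key3, and re-attaches a controller from the host application with that key. A condensed sketch of that authorize/attach pair follows; the NQNs, address, key name and file path are taken from the log, and it assumes the host-side bdev_nvme application on /var/tmp/host.sock already has the same key in its own keyring, as the earlier part of the test arranges.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

# Target side: expose the key file and allow this host to authenticate with it.
$RPC keyring_file_add_key key3 /tmp/spdk.key-sha512.KKa
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

# Host side (initiator app on /var/tmp/host.sock): attach a controller that
# performs DH-HMAC-CHAP against the subsystem using the same key.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3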
00:24:36.490 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:37.054 nvme0n1 00:24:37.054 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:37.054 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:37.054 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:37.320 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.320 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:37.320 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.320 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.320 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.320 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:37.320 { 00:24:37.320 "cntlid": 1, 00:24:37.320 "qid": 0, 00:24:37.320 "state": "enabled", 00:24:37.320 "thread": "nvmf_tgt_poll_group_000", 00:24:37.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:37.320 "listen_address": { 00:24:37.320 "trtype": "TCP", 00:24:37.320 "adrfam": "IPv4", 00:24:37.320 "traddr": "10.0.0.2", 00:24:37.320 "trsvcid": "4420" 00:24:37.320 }, 00:24:37.320 "peer_address": { 00:24:37.320 "trtype": "TCP", 00:24:37.320 "adrfam": "IPv4", 00:24:37.320 "traddr": "10.0.0.1", 00:24:37.320 "trsvcid": "52702" 00:24:37.320 }, 00:24:37.320 "auth": { 00:24:37.320 "state": "completed", 00:24:37.320 "digest": "sha512", 00:24:37.320 "dhgroup": "ffdhe8192" 00:24:37.320 } 00:24:37.320 } 00:24:37.320 ]' 00:24:37.320 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:37.320 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:37.320 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:37.320 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:37.320 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:37.320 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:37.321 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:37.321 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:37.580 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:24:37.580 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:24:38.146 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:38.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:38.146 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:38.146 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.146 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.146 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.146 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:38.146 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.146 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.146 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.146 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:24:38.146 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:24:38.404 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:38.404 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:38.404 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:38.404 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:38.404 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.404 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:38.404 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.405 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:38.405 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:38.405 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:38.672 request: 00:24:38.672 { 00:24:38.672 "name": "nvme0", 00:24:38.672 "trtype": "tcp", 00:24:38.672 "traddr": "10.0.0.2", 00:24:38.672 "adrfam": "ipv4", 00:24:38.672 "trsvcid": "4420", 00:24:38.672 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:38.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:38.672 "prchk_reftag": false, 00:24:38.672 "prchk_guard": false, 00:24:38.672 "hdgst": false, 00:24:38.672 "ddgst": false, 00:24:38.672 "dhchap_key": "key3", 00:24:38.672 "allow_unrecognized_csi": false, 00:24:38.672 "method": "bdev_nvme_attach_controller", 00:24:38.672 "req_id": 1 00:24:38.672 } 00:24:38.672 Got JSON-RPC error response 00:24:38.672 response: 00:24:38.672 { 00:24:38.672 "code": -5, 00:24:38.672 "message": "Input/output error" 00:24:38.672 } 00:24:38.672 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:38.672 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:38.672 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:38.672 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:38.672 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:24:38.672 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:24:38.672 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:38.672 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:38.672 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:38.672 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:38.672 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:38.672 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:38.672 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.672 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:38.672 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.672 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:38.672 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:38.933 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:38.933 request: 00:24:38.933 { 00:24:38.933 "name": "nvme0", 00:24:38.933 "trtype": "tcp", 00:24:38.933 "traddr": "10.0.0.2", 00:24:38.933 "adrfam": "ipv4", 00:24:38.933 "trsvcid": "4420", 00:24:38.933 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:38.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:38.933 "prchk_reftag": false, 00:24:38.933 "prchk_guard": false, 00:24:38.933 "hdgst": false, 00:24:38.933 "ddgst": false, 00:24:38.933 "dhchap_key": "key3", 00:24:38.933 "allow_unrecognized_csi": false, 00:24:38.933 "method": "bdev_nvme_attach_controller", 00:24:38.933 "req_id": 1 00:24:38.933 } 00:24:38.933 Got JSON-RPC error response 00:24:38.933 response: 00:24:38.933 { 00:24:38.933 "code": -5, 00:24:38.933 "message": "Input/output error" 00:24:38.933 } 00:24:38.933 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:38.933 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:38.933 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:38.933 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:38.933 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:38.933 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:24:38.933 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:38.933 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:38.933 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:38.933 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:39.190 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:39.448 request: 00:24:39.448 { 00:24:39.448 "name": "nvme0", 00:24:39.448 "trtype": "tcp", 00:24:39.448 "traddr": "10.0.0.2", 00:24:39.448 "adrfam": "ipv4", 00:24:39.448 "trsvcid": "4420", 00:24:39.448 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:39.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:39.448 "prchk_reftag": false, 00:24:39.448 "prchk_guard": false, 00:24:39.448 "hdgst": false, 00:24:39.448 "ddgst": false, 00:24:39.448 "dhchap_key": "key0", 00:24:39.448 "dhchap_ctrlr_key": "key1", 00:24:39.448 "allow_unrecognized_csi": false, 00:24:39.448 "method": "bdev_nvme_attach_controller", 00:24:39.448 "req_id": 1 00:24:39.448 } 00:24:39.448 Got JSON-RPC error response 00:24:39.448 response: 00:24:39.448 { 00:24:39.448 "code": -5, 00:24:39.448 "message": "Input/output error" 00:24:39.448 } 00:24:39.448 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:39.448 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:39.448 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:39.448 00:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:39.448 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:24:39.448 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:39.448 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:39.706 nvme0n1 00:24:39.706 00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:24:39.706 00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:39.706 00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:24:39.963 00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.963 00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:39.963 00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:40.221 00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:24:40.221 00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.221 00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.221 00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.221 00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:24:40.221 00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:40.221 00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:41.152 nvme0n1 00:24:41.152 00:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:24:41.152 00:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:24:41.152 00:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:41.152 00:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.152 00:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:41.152 00:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.152 00:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.152 00:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.152 00:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:24:41.152 00:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:24:41.152 00:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:41.410 00:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.410 00:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:24:41.410 00:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: --dhchap-ctrl-secret DHHC-1:03:ZWFlZGJhOTUwNDFlNmMxZWZhMjQyYWE2N2ZiNjQ1YTE5NDBjMmRlMDc0NTYxOWZiNTg4ZWMyMGNiMzM4NTdhY9GnrSc=: 00:24:41.983 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:24:41.983 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:24:41.983 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:24:41.983 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:24:41.983 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:24:41.983 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:24:41.983 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:24:41.983 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:41.983 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:41.983 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:24:41.983 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:41.983 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:24:41.983 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:42.240 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:42.240 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:42.240 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:42.240 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:24:42.240 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:42.240 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:42.497 request: 00:24:42.497 { 00:24:42.497 "name": "nvme0", 00:24:42.497 "trtype": "tcp", 00:24:42.497 "traddr": "10.0.0.2", 00:24:42.497 "adrfam": "ipv4", 00:24:42.497 "trsvcid": "4420", 00:24:42.497 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:42.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:24:42.497 "prchk_reftag": false, 00:24:42.497 "prchk_guard": false, 00:24:42.497 "hdgst": false, 00:24:42.497 "ddgst": false, 00:24:42.497 "dhchap_key": "key1", 00:24:42.497 "allow_unrecognized_csi": false, 00:24:42.497 "method": "bdev_nvme_attach_controller", 00:24:42.497 "req_id": 1 00:24:42.497 } 00:24:42.497 Got JSON-RPC error response 00:24:42.497 response: 00:24:42.497 { 00:24:42.497 "code": -5, 00:24:42.497 "message": "Input/output error" 00:24:42.497 } 00:24:42.497 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:42.497 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:42.497 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:42.497 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:42.497 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:42.497 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:42.497 00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:43.430 nvme0n1 00:24:43.430 00:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:24:43.430 00:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:24:43.430 00:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:43.430 00:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.430 00:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:43.430 00:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:43.688 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:43.688 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.688 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.688 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:24:43.688 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:43.688 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:43.946 nvme0n1 00:24:43.946 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:24:43.946 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:24:43.946 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:44.204 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.204 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:44.204 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: '' 2s 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: ]] 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NzFmZGViOTE0OGUyZGMyYjAzNDI3YzZlZjUxZTgxYjc0p2Kv: 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:44.462 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key2 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: 2s 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:24:46.378 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: ]] 00:24:46.379 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmY4OTdjNGY5NmNkOGY5M2JiNjUwMzBmMGI4YmUzYTBjNWJmZWVlMDZmOTMyYjkyGOG1ZA==: 00:24:46.379 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:46.379 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:48.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:48.903 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:49.160 nvme0n1 00:24:49.160 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:49.160 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.160 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.160 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.160 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:49.160 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:49.724 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:24:49.724 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:24:49.724 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:49.982 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.982 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:49.982 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.982 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.982 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.982 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:24:49.982 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:24:49.982 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:24:49.982 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:49.982 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:24:50.239 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.239 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:50.239 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.239 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.239 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.239 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:50.239 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:50.239 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:50.239 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:24:50.239 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:50.240 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:24:50.240 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:50.240 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:50.240 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:50.802 request: 00:24:50.802 { 00:24:50.802 "name": "nvme0", 00:24:50.802 "dhchap_key": "key1", 00:24:50.802 "dhchap_ctrlr_key": "key3", 00:24:50.802 "method": "bdev_nvme_set_keys", 00:24:50.802 "req_id": 1 00:24:50.802 } 00:24:50.802 Got JSON-RPC error response 00:24:50.802 response: 00:24:50.802 { 00:24:50.802 "code": -13, 00:24:50.802 "message": "Permission denied" 00:24:50.802 } 00:24:50.802 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:50.802 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:50.802 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:50.802 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:50.802 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:50.802 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:50.802 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:51.058 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:24:51.058 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:24:51.990 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:51.990 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:51.990 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:52.248 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:24:52.248 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:52.248 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.248 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.248 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.248 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:52.248 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:52.248 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:52.815 nvme0n1 00:24:52.815 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:52.815 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.815 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.815 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.815 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:52.815 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:52.815 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:52.815 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
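A minimal sketch, condensed from the trace above, of the DH-HMAC-CHAP re-key flow the auth_target test drives over the host RPC socket. The rpc.py path, socket, NQNs, address and key slot names are copied from the log; the shell variables and step comments are added here only for readability, and the target-side call is shown as a comment because it goes to the default RPC socket rather than /var/tmp/host.sock.

HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e"
SUBNQN="nqn.2024-03.io.spdk:cnode0"

# 1. Attach with a key pair the subsystem currently allows.
$HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1

# 2. Target side (default RPC socket): allow the next key pair.
#    rpc.py nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

# 3. Re-authenticate the live controller with the new pair, then verify it is still present.
$HOSTRPC bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
$HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0

# Negative paths seen above: attaching with a key the subsystem does not accept
# fails with -5 (Input/output error), and bdev_nvme_set_keys with a pair the
# subsystem has not been given fails with -13 (Permission denied).
$HOSTRPC bdev_nvme_detach_controller nvme0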
00:24:52.815 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:52.815 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:24:52.815 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:52.815 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:52.815 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:53.381 request: 00:24:53.381 { 00:24:53.381 "name": "nvme0", 00:24:53.381 "dhchap_key": "key2", 00:24:53.381 "dhchap_ctrlr_key": "key0", 00:24:53.381 "method": "bdev_nvme_set_keys", 00:24:53.381 "req_id": 1 00:24:53.381 } 00:24:53.381 Got JSON-RPC error response 00:24:53.381 response: 00:24:53.381 { 00:24:53.381 "code": -13, 00:24:53.381 "message": "Permission denied" 00:24:53.381 } 00:24:53.381 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:53.381 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:53.381 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:53.381 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:53.381 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:53.381 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:53.381 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:53.640 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:24:53.640 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:24:54.573 00:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:54.573 00:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:54.573 00:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:54.832 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:24:54.832 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:24:54.832 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:24:54.832 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 402648 00:24:54.832 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 402648 ']' 00:24:54.832 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 402648 00:24:54.832 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:54.832 00:06:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.832 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 402648 00:24:54.832 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:54.832 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:54.832 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 402648' 00:24:54.832 killing process with pid 402648 00:24:54.832 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 402648 00:24:54.832 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 402648 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:55.090 rmmod nvme_tcp 00:24:55.090 rmmod nvme_fabrics 00:24:55.090 rmmod nvme_keyring 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 425514 ']' 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 425514 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 425514 ']' 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 425514 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.090 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 425514 00:24:55.349 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:55.349 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:55.349 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 425514' 00:24:55.349 killing process with pid 425514 00:24:55.349 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 425514 00:24:55.349 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 425514 00:24:55.349 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:55.349 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:55.349 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:55.349 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:24:55.349 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:24:55.349 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:55.349 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:55.350 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:55.350 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:55.350 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.350 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.350 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.887 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:57.887 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.hrd /tmp/spdk.key-sha256.9Xt /tmp/spdk.key-sha384.VF4 /tmp/spdk.key-sha512.KKa /tmp/spdk.key-sha512.JJ0 /tmp/spdk.key-sha384.HjC /tmp/spdk.key-sha256.Nb6 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:24:57.887 00:24:57.887 real 2m35.704s 00:24:57.887 user 5m48.231s 00:24:57.887 sys 0m32.441s 00:24:57.887 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.887 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.887 ************************************ 00:24:57.887 END TEST nvmf_auth_target 00:24:57.887 ************************************ 00:24:57.887 00:06:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:24:57.888 00:06:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:57.888 00:06:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:57.888 00:06:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.888 00:06:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:57.888 ************************************ 00:24:57.888 START TEST nvmf_bdevio_no_huge 00:24:57.888 ************************************ 00:24:57.888 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:57.888 * Looking for test storage... 
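Condensed from the cleanup above, the teardown performed once the auth flows finish; every command is taken from the trace, and remove_spdk_ns is a common.sh helper whose body the log does not expand, so it is left unexpanded here as well.

# After killprocess has stopped the SPDK apps started for the test, unload the
# kernel initiator modules pulled in by nvme connect.
modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, per the rmmod lines
modprobe -v -r nvme-fabrics

# Remove only the iptables rules the test added (they carry an SPDK_NVMF comment),
# tear down the target namespace, and delete the generated key files.
iptables-save | grep -v SPDK_NVMF | iptables-restore
remove_spdk_ns                 # helper from test/nvmf/common.sh
ip -4 addr flush cvl_0_1
rm -f /tmp/spdk.key-null.hrd /tmp/spdk.key-sha256.9Xt /tmp/spdk.key-sha384.VF4 \
      /tmp/spdk.key-sha512.KKa /tmp/spdk.key-sha512.JJ0 /tmp/spdk.key-sha384.HjC /tmp/spdk.key-sha256.Nb6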
00:24:57.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:57.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.888 --rc genhtml_branch_coverage=1 00:24:57.888 --rc genhtml_function_coverage=1 00:24:57.888 --rc genhtml_legend=1 00:24:57.888 --rc geninfo_all_blocks=1 00:24:57.888 --rc geninfo_unexecuted_blocks=1 00:24:57.888 00:24:57.888 ' 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:57.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.888 --rc genhtml_branch_coverage=1 00:24:57.888 --rc genhtml_function_coverage=1 00:24:57.888 --rc genhtml_legend=1 00:24:57.888 --rc geninfo_all_blocks=1 00:24:57.888 --rc geninfo_unexecuted_blocks=1 00:24:57.888 00:24:57.888 ' 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:57.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.888 --rc genhtml_branch_coverage=1 00:24:57.888 --rc genhtml_function_coverage=1 00:24:57.888 --rc genhtml_legend=1 00:24:57.888 --rc geninfo_all_blocks=1 00:24:57.888 --rc geninfo_unexecuted_blocks=1 00:24:57.888 00:24:57.888 ' 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:57.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.888 --rc genhtml_branch_coverage=1 00:24:57.888 --rc genhtml_function_coverage=1 00:24:57.888 --rc genhtml_legend=1 00:24:57.888 --rc geninfo_all_blocks=1 00:24:57.888 --rc geninfo_unexecuted_blocks=1 00:24:57.888 00:24:57.888 ' 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.888 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:57.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.889 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:25:06.011 
00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:06.011 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:06.011 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:06.011 Found net devices under 0000:af:00.0: cvl_0_0 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:06.011 Found net devices under 0000:af:00.1: cvl_0_1 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:06.011 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:06.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:25:06.011 00:25:06.011 --- 10.0.0.2 ping statistics --- 00:25:06.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.011 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:06.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:25:06.012 00:25:06.012 --- 10.0.0.1 ping statistics --- 00:25:06.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.012 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=432549 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 432549 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 432549 ']' 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.012 00:06:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:06.012 [2024-12-10 00:06:49.537354] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:25:06.012 [2024-12-10 00:06:49.537401] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:25:06.012 [2024-12-10 00:06:49.641040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:06.012 [2024-12-10 00:06:49.696068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.012 [2024-12-10 00:06:49.696104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.012 [2024-12-10 00:06:49.696113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.012 [2024-12-10 00:06:49.696122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.012 [2024-12-10 00:06:49.696145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:06.012 [2024-12-10 00:06:49.697548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:06.012 [2024-12-10 00:06:49.697657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:06.012 [2024-12-10 00:06:49.697743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:06.012 [2024-12-10 00:06:49.697744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:06.012 [2024-12-10 00:06:50.427124] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:06.012 Malloc0 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:06.012 [2024-12-10 00:06:50.463940] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:06.012 { 00:25:06.012 "params": { 00:25:06.012 "name": "Nvme$subsystem", 00:25:06.012 "trtype": "$TEST_TRANSPORT", 00:25:06.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.012 "adrfam": "ipv4", 00:25:06.012 "trsvcid": "$NVMF_PORT", 00:25:06.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.012 "hdgst": ${hdgst:-false}, 00:25:06.012 "ddgst": ${ddgst:-false} 00:25:06.012 }, 00:25:06.012 "method": "bdev_nvme_attach_controller" 00:25:06.012 } 00:25:06.012 EOF 00:25:06.012 )") 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:25:06.012 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:25:06.270 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:25:06.270 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:06.270 "params": { 00:25:06.270 "name": "Nvme1", 00:25:06.270 "trtype": "tcp", 00:25:06.270 "traddr": "10.0.0.2", 00:25:06.270 "adrfam": "ipv4", 00:25:06.270 "trsvcid": "4420", 00:25:06.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:06.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:06.270 "hdgst": false, 00:25:06.270 "ddgst": false 00:25:06.270 }, 00:25:06.270 "method": "bdev_nvme_attach_controller" 00:25:06.270 }' 00:25:06.270 [2024-12-10 00:06:50.514448] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:25:06.270 [2024-12-10 00:06:50.514498] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid432802 ] 00:25:06.270 [2024-12-10 00:06:50.611571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:06.270 [2024-12-10 00:06:50.668371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.270 [2024-12-10 00:06:50.668481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.270 [2024-12-10 00:06:50.668482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.527 I/O targets: 00:25:06.527 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:06.527 00:25:06.527 00:25:06.527 CUnit - A unit testing framework for C - Version 2.1-3 00:25:06.527 http://cunit.sourceforge.net/ 00:25:06.527 00:25:06.527 00:25:06.527 Suite: bdevio tests on: Nvme1n1 00:25:06.787 Test: blockdev write read block ...passed 00:25:06.787 Test: blockdev write zeroes read block ...passed 00:25:06.787 Test: blockdev write zeroes read no split ...passed 00:25:06.787 Test: blockdev write zeroes read split ...passed 00:25:06.787 Test: blockdev write zeroes read split partial ...passed 00:25:06.787 Test: blockdev reset ...[2024-12-10 00:06:51.139567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:06.787 [2024-12-10 00:06:51.139636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1648a10 (9): Bad file descriptor 00:25:06.787 [2024-12-10 00:06:51.249996] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:25:06.787 passed 00:25:07.045 Test: blockdev write read 8 blocks ...passed 00:25:07.045 Test: blockdev write read size > 128k ...passed 00:25:07.045 Test: blockdev write read invalid size ...passed 00:25:07.045 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:07.045 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:07.045 Test: blockdev write read max offset ...passed 00:25:07.045 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:07.045 Test: blockdev writev readv 8 blocks ...passed 00:25:07.045 Test: blockdev writev readv 30 x 1block ...passed 00:25:07.045 Test: blockdev writev readv block ...passed 00:25:07.045 Test: blockdev writev readv size > 128k ...passed 00:25:07.045 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:07.045 Test: blockdev comparev and writev ...[2024-12-10 00:06:51.460373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:07.045 [2024-12-10 00:06:51.460403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.045 [2024-12-10 00:06:51.460419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:07.046 [2024-12-10 00:06:51.460429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.046 [2024-12-10 00:06:51.460676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:07.046 [2024-12-10 00:06:51.460688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:07.046 [2024-12-10 00:06:51.460702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:07.046 [2024-12-10 00:06:51.460712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:07.046 [2024-12-10 00:06:51.460957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:07.046 [2024-12-10 00:06:51.460969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:07.046 [2024-12-10 00:06:51.460983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:07.046 [2024-12-10 00:06:51.460992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:07.046 [2024-12-10 00:06:51.461211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:07.046 [2024-12-10 00:06:51.461223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:07.046 [2024-12-10 00:06:51.461237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:07.046 [2024-12-10 00:06:51.461246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:07.046 passed 00:25:07.303 Test: blockdev nvme passthru rw ...passed 00:25:07.303 Test: blockdev nvme passthru vendor specific ...[2024-12-10 00:06:51.543164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:07.303 [2024-12-10 00:06:51.543188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:07.303 [2024-12-10 00:06:51.543296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:07.303 [2024-12-10 00:06:51.543308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:07.303 [2024-12-10 00:06:51.543406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:07.303 [2024-12-10 00:06:51.543417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:07.303 [2024-12-10 00:06:51.543518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:07.303 [2024-12-10 00:06:51.543529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:07.303 passed 00:25:07.303 Test: blockdev nvme admin passthru ...passed 00:25:07.303 Test: blockdev copy ...passed 00:25:07.303 00:25:07.303 Run Summary: Type Total Ran Passed Failed Inactive 00:25:07.303 suites 1 1 n/a 0 0 00:25:07.303 tests 23 23 23 0 0 00:25:07.303 asserts 152 152 152 0 n/a 00:25:07.303 00:25:07.303 Elapsed time = 1.274 seconds 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:07.561 rmmod nvme_tcp 00:25:07.561 rmmod nvme_fabrics 00:25:07.561 rmmod nvme_keyring 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 432549 ']' 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 432549 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 432549 ']' 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 432549 00:25:07.561 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:25:07.561 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.561 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 432549 00:25:07.820 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:25:07.820 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:25:07.820 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 432549' 00:25:07.820 killing process with pid 432549 00:25:07.820 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 432549 00:25:07.820 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 432549 00:25:08.082 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:08.082 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:08.082 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:08.082 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:25:08.082 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:25:08.082 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:08.082 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:25:08.082 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:08.082 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:08.082 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.082 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.082 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:10.617 00:25:10.617 real 0m12.565s 00:25:10.617 user 0m15.256s 00:25:10.617 sys 0m6.771s 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:25:10.617 ************************************ 00:25:10.617 END TEST nvmf_bdevio_no_huge 00:25:10.617 ************************************ 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:10.617 ************************************ 00:25:10.617 START TEST nvmf_tls 00:25:10.617 ************************************ 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:10.617 * Looking for test storage... 00:25:10.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:10.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.617 --rc genhtml_branch_coverage=1 00:25:10.617 --rc genhtml_function_coverage=1 00:25:10.617 --rc genhtml_legend=1 00:25:10.617 --rc geninfo_all_blocks=1 00:25:10.617 --rc geninfo_unexecuted_blocks=1 00:25:10.617 00:25:10.617 ' 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:10.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.617 --rc genhtml_branch_coverage=1 00:25:10.617 --rc genhtml_function_coverage=1 00:25:10.617 --rc genhtml_legend=1 00:25:10.617 --rc geninfo_all_blocks=1 00:25:10.617 --rc geninfo_unexecuted_blocks=1 00:25:10.617 00:25:10.617 ' 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:10.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.617 --rc genhtml_branch_coverage=1 00:25:10.617 --rc genhtml_function_coverage=1 00:25:10.617 --rc genhtml_legend=1 00:25:10.617 --rc geninfo_all_blocks=1 00:25:10.617 --rc geninfo_unexecuted_blocks=1 00:25:10.617 00:25:10.617 ' 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:10.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.617 --rc genhtml_branch_coverage=1 00:25:10.617 --rc genhtml_function_coverage=1 00:25:10.617 --rc genhtml_legend=1 00:25:10.617 --rc geninfo_all_blocks=1 00:25:10.617 --rc geninfo_unexecuted_blocks=1 00:25:10.617 00:25:10.617 ' 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.617 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:10.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:25:10.618 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.744 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:18.745 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:18.745 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:18.745 Found net devices under 0000:af:00.0: cvl_0_0 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:18.745 Found net devices under 0000:af:00.1: cvl_0_1 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:18.745 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:18.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:25:18.745 00:25:18.745 --- 10.0.0.2 ping statistics --- 00:25:18.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.745 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:18.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:25:18.745 00:25:18.745 --- 10.0.0.1 ping statistics --- 00:25:18.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.745 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=436759 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 436759 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 436759 ']' 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.745 00:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.745 [2024-12-10 00:07:02.224722] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:25:18.746 [2024-12-10 00:07:02.224777] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.746 [2024-12-10 00:07:02.321708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.746 [2024-12-10 00:07:02.362451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.746 [2024-12-10 00:07:02.362486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.746 [2024-12-10 00:07:02.362496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.746 [2024-12-10 00:07:02.362504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.746 [2024-12-10 00:07:02.362511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.746 [2024-12-10 00:07:02.363087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.746 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.746 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:18.746 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:18.746 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:18.746 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.746 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.746 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:25:18.746 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:25:19.004 true 00:25:19.004 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:19.004 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:25:19.004 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:25:19.004 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:25:19.004 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:19.263 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:19.263 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:25:19.522 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:25:19.522 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:25:19.522 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:25:19.781 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:19.781 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:25:19.781 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:25:19.781 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:25:19.781 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:19.781 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:25:20.040 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:25:20.040 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:25:20.040 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:25:20.299 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:20.299 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:25:20.299 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:25:20.299 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:25:20.299 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:25:20.557 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:25:20.557 00:07:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Vb48a25VMw 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.gjtnVYyZ5k 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Vb48a25VMw 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.gjtnVYyZ5k 00:25:20.815 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:21.072 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:25:21.330 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Vb48a25VMw 00:25:21.330 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Vb48a25VMw 00:25:21.330 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:21.588 [2024-12-10 00:07:05.840255] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.588 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:21.588 00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:21.846 [2024-12-10 00:07:06.205176] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:21.846 [2024-12-10 00:07:06.205403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.846 00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:22.104 malloc0 00:25:22.104 00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:22.104 00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Vb48a25VMw 00:25:22.361 00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:22.619 00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Vb48a25VMw 00:25:32.593 Initializing NVMe Controllers 00:25:32.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:32.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:32.593 Initialization complete. Launching workers. 00:25:32.593 ======================================================== 00:25:32.593 Latency(us) 00:25:32.593 Device Information : IOPS MiB/s Average min max 00:25:32.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16373.15 63.96 3908.96 826.77 5785.10 00:25:32.593 ======================================================== 00:25:32.593 Total : 16373.15 63.96 3908.96 826.77 5785.10 00:25:32.593 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Vb48a25VMw 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Vb48a25VMw 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=439432 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 439432 /var/tmp/bdevperf.sock 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 439432 ']' 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:32.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.593 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:32.851 [2024-12-10 00:07:17.100252] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:25:32.851 [2024-12-10 00:07:17.100304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439432 ] 00:25:32.851 [2024-12-10 00:07:17.190575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.851 [2024-12-10 00:07:17.231121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.784 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.784 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:33.784 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Vb48a25VMw 00:25:33.785 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:34.042 [2024-12-10 00:07:18.310277] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:34.043 TLSTESTn1 00:25:34.043 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:34.043 Running I/O for 10 seconds... 
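Note: the ten-second run launched above is driven entirely over bdevperf's private RPC socket: the same PSK file that was registered on the target is handed to the initiator-side bdevperf instance, a TLS-protected controller is attached with that key, and only then are the verify jobs started. A condensed sketch of that client-side sequence follows, using the binaries, paths and NQNs visible in this run; it is an illustration of the steps, not the full tls.sh run_bdevperf logic ($spdk is shorthand for the workspace checkout).

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # start bdevperf in wait-for-RPC mode on its own socket (the harness backgrounds
  # this and polls until /var/tmp/bdevperf.sock is listening)
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # give the initiator the same PSK interchange file the target was configured with
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Vb48a25VMw

  # attach a TLS-protected NVMe/TCP controller; --psk names the keyring entry, not the file
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # kick off the queued I/O jobs (the "Running I/O for 10 seconds..." phase above)
  $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The IOPS ticker and latency summary that follow are the output of that perform_tests call.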
00:25:36.347 5271.00 IOPS, 20.59 MiB/s [2024-12-09T23:07:21.753Z] 5293.50 IOPS, 20.68 MiB/s [2024-12-09T23:07:22.686Z] 5277.00 IOPS, 20.61 MiB/s [2024-12-09T23:07:23.623Z] 5087.75 IOPS, 19.87 MiB/s [2024-12-09T23:07:24.555Z] 5122.20 IOPS, 20.01 MiB/s [2024-12-09T23:07:25.926Z] 5131.33 IOPS, 20.04 MiB/s [2024-12-09T23:07:26.860Z] 5141.86 IOPS, 20.09 MiB/s [2024-12-09T23:07:27.792Z] 5178.50 IOPS, 20.23 MiB/s [2024-12-09T23:07:28.725Z] 5173.89 IOPS, 20.21 MiB/s [2024-12-09T23:07:28.725Z] 5178.80 IOPS, 20.23 MiB/s 00:25:44.252 Latency(us) 00:25:44.252 [2024-12-09T23:07:28.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.252 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:44.252 Verification LBA range: start 0x0 length 0x2000 00:25:44.252 TLSTESTn1 : 10.01 5184.48 20.25 0.00 0.00 24654.99 4666.16 33973.86 00:25:44.252 [2024-12-09T23:07:28.725Z] =================================================================================================================== 00:25:44.252 [2024-12-09T23:07:28.725Z] Total : 5184.48 20.25 0.00 0.00 24654.99 4666.16 33973.86 00:25:44.252 { 00:25:44.252 "results": [ 00:25:44.252 { 00:25:44.252 "job": "TLSTESTn1", 00:25:44.252 "core_mask": "0x4", 00:25:44.252 "workload": "verify", 00:25:44.252 "status": "finished", 00:25:44.252 "verify_range": { 00:25:44.252 "start": 0, 00:25:44.252 "length": 8192 00:25:44.252 }, 00:25:44.252 "queue_depth": 128, 00:25:44.252 "io_size": 4096, 00:25:44.252 "runtime": 10.013532, 00:25:44.252 "iops": 5184.484355769772, 00:25:44.252 "mibps": 20.251892014725673, 00:25:44.252 "io_failed": 0, 00:25:44.252 "io_timeout": 0, 00:25:44.252 "avg_latency_us": 24654.986312487723, 00:25:44.252 "min_latency_us": 4666.1632, 00:25:44.252 "max_latency_us": 33973.8624 00:25:44.252 } 00:25:44.252 ], 00:25:44.252 "core_count": 1 00:25:44.252 } 00:25:44.252 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:44.252 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 439432 00:25:44.252 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 439432 ']' 00:25:44.252 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 439432 00:25:44.252 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:44.253 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.253 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 439432 00:25:44.253 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:44.253 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:44.253 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 439432' 00:25:44.253 killing process with pid 439432 00:25:44.253 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 439432 00:25:44.253 Received shutdown signal, test time was about 10.000000 seconds 00:25:44.253 00:25:44.253 Latency(us) 00:25:44.253 [2024-12-09T23:07:28.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.253 [2024-12-09T23:07:28.726Z] 
=================================================================================================================== 00:25:44.253 [2024-12-09T23:07:28.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:44.253 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 439432 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gjtnVYyZ5k 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gjtnVYyZ5k 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gjtnVYyZ5k 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gjtnVYyZ5k 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=441289 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 441289 /var/tmp/bdevperf.sock 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 441289 ']' 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:44.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
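Note: both key files being compared in the next case, /tmp/tmp.Vb48a25VMw and /tmp/tmp.gjtnVYyZ5k, were produced earlier by format_interchange_psk, which wraps a plain secret into the NVMe TLS PSK interchange form NVMeTLSkey-1:01:<base64>: (the 01/02 field simply tracks the digest argument: 1 for the 32-character secret here, 2 for the 48-character secret used later). The inline `python -` step is not shown in the log; the sketch below is an assumption about what it computes, namely base64 over the secret with a CRC32 appended, and is offered only to make strings like NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: easier to read, not as the verbatim nvmf/common.sh helper.

  key=00112233445566778899aabbccddeeff
  digest=1
  python3 - "$key" "$digest" <<'EOF'
import sys, base64, struct, zlib
key = sys.argv[1].encode()                 # the secret is used as-is, as ASCII bytes
digest = int(sys.argv[2])
crc = struct.pack('<I', zlib.crc32(key))   # CRC32 appended; byte order assumed here
print('NVMeTLSkey-1:%02x:%s:' % (digest, base64.b64encode(key + crc).decode()))
EOF

The resulting string is what gets echoed into the mktemp'd file and chmod'd to 0600 before being registered with keyring_file_add_key.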
00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:44.512 00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:44.512 [2024-12-10 00:07:28.834970] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:25:44.512 [2024-12-10 00:07:28.835020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441289 ] 00:25:44.512 [2024-12-10 00:07:28.915864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.512 [2024-12-10 00:07:28.955693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:44.769 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:44.769 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:44.769 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gjtnVYyZ5k 00:25:44.769 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:45.027 [2024-12-10 00:07:29.400432] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:45.027 [2024-12-10 00:07:29.408962] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:45.027 [2024-12-10 00:07:29.409850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177e390 (107): Transport endpoint is not connected 00:25:45.027 [2024-12-10 00:07:29.410843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177e390 (9): Bad file descriptor 00:25:45.027 [2024-12-10 00:07:29.411845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:25:45.027 [2024-12-10 00:07:29.411857] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:45.027 [2024-12-10 00:07:29.411867] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:25:45.027 [2024-12-10 00:07:29.411880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
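Note: this failure is the point of the tls.sh@147 case, and the JSON-RPC request and error response are dumped just below. The target only knows the key stored in /tmp/tmp.Vb48a25VMw for host1, while the initiator presented /tmp/tmp.gjtnVYyZ5k, so the TLS handshake never completes and bdev_nvme_attach_controller surfaces an I/O error. The NOT wrapper turns that expected non-zero exit into a pass; a rough equivalent of the assertion, under the assumption that run_bdevperf propagates the attach failure as its exit status:

  # expect the attach to fail when the client-side PSK does not match the target's
  if $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
      echo "attach unexpectedly succeeded with a mismatched PSK" >&2
      exit 1
  fi

Here key0 is the bdevperf-side keyring entry that was loaded from /tmp/tmp.gjtnVYyZ5k a few lines above.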
00:25:45.027 request: 00:25:45.027 { 00:25:45.027 "name": "TLSTEST", 00:25:45.027 "trtype": "tcp", 00:25:45.027 "traddr": "10.0.0.2", 00:25:45.027 "adrfam": "ipv4", 00:25:45.027 "trsvcid": "4420", 00:25:45.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:45.027 "prchk_reftag": false, 00:25:45.027 "prchk_guard": false, 00:25:45.027 "hdgst": false, 00:25:45.027 "ddgst": false, 00:25:45.027 "psk": "key0", 00:25:45.027 "allow_unrecognized_csi": false, 00:25:45.027 "method": "bdev_nvme_attach_controller", 00:25:45.027 "req_id": 1 00:25:45.027 } 00:25:45.027 Got JSON-RPC error response 00:25:45.027 response: 00:25:45.027 { 00:25:45.027 "code": -5, 00:25:45.027 "message": "Input/output error" 00:25:45.027 } 00:25:45.027 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 441289 00:25:45.027 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 441289 ']' 00:25:45.027 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 441289 00:25:45.027 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:45.027 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:45.027 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 441289 00:25:45.027 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:45.027 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:45.027 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 441289' 00:25:45.027 killing process with pid 441289 00:25:45.027 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 441289 00:25:45.027 Received shutdown signal, test time was about 10.000000 seconds 00:25:45.027 00:25:45.027 Latency(us) 00:25:45.027 [2024-12-09T23:07:29.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.027 [2024-12-09T23:07:29.500Z] =================================================================================================================== 00:25:45.027 [2024-12-09T23:07:29.500Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:45.027 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 441289 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Vb48a25VMw 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.Vb48a25VMw 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Vb48a25VMw 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Vb48a25VMw 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=441518 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 441518 /var/tmp/bdevperf.sock 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 441518 ']' 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:45.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.285 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:45.285 [2024-12-10 00:07:29.696342] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:25:45.285 [2024-12-10 00:07:29.696392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441518 ] 00:25:45.544 [2024-12-10 00:07:29.775602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.544 [2024-12-10 00:07:29.815920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.544 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.544 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:45.544 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Vb48a25VMw 00:25:45.801 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:25:45.801 [2024-12-10 00:07:30.273367] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:46.058 [2024-12-10 00:07:30.281068] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:46.058 [2024-12-10 00:07:30.281099] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:46.058 [2024-12-10 00:07:30.281125] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:46.058 [2024-12-10 00:07:30.281814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cc390 (107): Transport endpoint is not connected 00:25:46.058 [2024-12-10 00:07:30.282807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cc390 (9): Bad file descriptor 00:25:46.058 [2024-12-10 00:07:30.283809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:25:46.058 [2024-12-10 00:07:30.283820] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:46.058 [2024-12-10 00:07:30.283833] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:25:46.058 [2024-12-10 00:07:30.283846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
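Note: in the tls.sh@150 case the key itself is the valid one (/tmp/tmp.Vb48a25VMw) but the connection is made as host2, so the target-side lookup fails first: "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1". The identity is built from the host and subsystem NQNs and only resolves for hosts registered with a PSK; the request/response dump below shows the resulting I/O error on the initiator. For host2 to have passed, the target would have needed a registration along these lines, mirroring the host1 commands earlier in the log (the key name key1 is illustrative; the test deliberately leaves host2 unregistered):

  $spdk/scripts/rpc.py keyring_file_add_key key1 /tmp/tmp.Vb48a25VMw
  $spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host2 --psk key1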
00:25:46.058 request: 00:25:46.058 { 00:25:46.058 "name": "TLSTEST", 00:25:46.058 "trtype": "tcp", 00:25:46.058 "traddr": "10.0.0.2", 00:25:46.058 "adrfam": "ipv4", 00:25:46.058 "trsvcid": "4420", 00:25:46.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.058 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:46.058 "prchk_reftag": false, 00:25:46.058 "prchk_guard": false, 00:25:46.058 "hdgst": false, 00:25:46.058 "ddgst": false, 00:25:46.058 "psk": "key0", 00:25:46.058 "allow_unrecognized_csi": false, 00:25:46.058 "method": "bdev_nvme_attach_controller", 00:25:46.058 "req_id": 1 00:25:46.058 } 00:25:46.058 Got JSON-RPC error response 00:25:46.058 response: 00:25:46.058 { 00:25:46.058 "code": -5, 00:25:46.058 "message": "Input/output error" 00:25:46.058 } 00:25:46.058 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 441518 00:25:46.058 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 441518 ']' 00:25:46.058 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 441518 00:25:46.058 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:46.058 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:46.058 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 441518 00:25:46.058 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:46.058 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:46.058 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 441518' 00:25:46.058 killing process with pid 441518 00:25:46.058 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 441518 00:25:46.058 Received shutdown signal, test time was about 10.000000 seconds 00:25:46.058 00:25:46.058 Latency(us) 00:25:46.058 [2024-12-09T23:07:30.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.058 [2024-12-09T23:07:30.531Z] =================================================================================================================== 00:25:46.058 [2024-12-09T23:07:30.531Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:46.058 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 441518 00:25:46.058 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:46.058 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:46.315 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:46.315 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:46.315 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:46.315 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Vb48a25VMw 00:25:46.315 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:46.315 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.Vb48a25VMw 00:25:46.315 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:46.315 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:46.315 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Vb48a25VMw 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Vb48a25VMw 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=441571 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 441571 /var/tmp/bdevperf.sock 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 441571 ']' 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:46.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:46.316 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:46.316 [2024-12-10 00:07:30.586866] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:25:46.316 [2024-12-10 00:07:30.586915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441571 ] 00:25:46.316 [2024-12-10 00:07:30.670061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.316 [2024-12-10 00:07:30.709951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.573 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:46.573 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:46.573 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Vb48a25VMw 00:25:46.573 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:46.831 [2024-12-10 00:07:31.154079] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:46.831 [2024-12-10 00:07:31.163305] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:46.831 [2024-12-10 00:07:31.163329] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:46.831 [2024-12-10 00:07:31.163354] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:46.831 [2024-12-10 00:07:31.163503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2227390 (107): Transport endpoint is not connected 00:25:46.831 [2024-12-10 00:07:31.164496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2227390 (9): Bad file descriptor 00:25:46.831 [2024-12-10 00:07:31.165497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:25:46.831 [2024-12-10 00:07:31.165511] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:46.831 [2024-12-10 00:07:31.165520] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:25:46.831 [2024-12-10 00:07:31.165532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
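Note: the tls.sh@153 variation flips the other half of the identity: host1 and the correct key, but subsystem cnode2, which was never created, so the lookup for "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" fails the same way and the initiator again reports an I/O error in the dump that follows. Making that identity resolvable would require the full target-side setup that cnode1 received; roughly, reusing only RPCs that appear earlier in this log (the serial number is illustrative, and the test intentionally does not run any of this):

  $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
  $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 \
      -t tcp -a 10.0.0.2 -s 4420 -k
  $spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 \
      nqn.2016-06.io.spdk:host1 --psk key0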
00:25:46.831 request: 00:25:46.831 { 00:25:46.831 "name": "TLSTEST", 00:25:46.831 "trtype": "tcp", 00:25:46.831 "traddr": "10.0.0.2", 00:25:46.831 "adrfam": "ipv4", 00:25:46.831 "trsvcid": "4420", 00:25:46.831 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:46.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:46.831 "prchk_reftag": false, 00:25:46.831 "prchk_guard": false, 00:25:46.831 "hdgst": false, 00:25:46.831 "ddgst": false, 00:25:46.831 "psk": "key0", 00:25:46.831 "allow_unrecognized_csi": false, 00:25:46.831 "method": "bdev_nvme_attach_controller", 00:25:46.831 "req_id": 1 00:25:46.831 } 00:25:46.831 Got JSON-RPC error response 00:25:46.831 response: 00:25:46.831 { 00:25:46.831 "code": -5, 00:25:46.831 "message": "Input/output error" 00:25:46.831 } 00:25:46.831 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 441571 00:25:46.831 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 441571 ']' 00:25:46.831 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 441571 00:25:46.831 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:46.831 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:46.831 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 441571 00:25:46.831 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:46.831 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:46.831 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 441571' 00:25:46.831 killing process with pid 441571 00:25:46.831 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 441571 00:25:46.831 Received shutdown signal, test time was about 10.000000 seconds 00:25:46.831 00:25:46.831 Latency(us) 00:25:46.831 [2024-12-09T23:07:31.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.831 [2024-12-09T23:07:31.304Z] =================================================================================================================== 00:25:46.831 [2024-12-09T23:07:31.305Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:46.832 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 441571 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:47.090 00:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=441834 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 441834 /var/tmp/bdevperf.sock 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 441834 ']' 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:47.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:47.090 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:47.090 [2024-12-10 00:07:31.455254] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:25:47.090 [2024-12-10 00:07:31.455304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441834 ] 00:25:47.090 [2024-12-10 00:07:31.538841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.348 [2024-12-10 00:07:31.577285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.930 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.930 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:47.930 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:25:48.188 [2024-12-10 00:07:32.450769] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:25:48.188 [2024-12-10 00:07:32.450799] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:48.188 request: 00:25:48.188 { 00:25:48.188 "name": "key0", 00:25:48.188 "path": "", 00:25:48.188 "method": "keyring_file_add_key", 00:25:48.188 "req_id": 1 00:25:48.188 } 00:25:48.188 Got JSON-RPC error response 00:25:48.188 response: 00:25:48.188 { 00:25:48.188 "code": -1, 00:25:48.188 "message": "Operation not permitted" 00:25:48.188 } 00:25:48.188 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:48.188 [2024-12-10 00:07:32.639343] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:48.188 [2024-12-10 00:07:32.639378] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:25:48.188 request: 00:25:48.188 { 00:25:48.188 "name": "TLSTEST", 00:25:48.188 "trtype": "tcp", 00:25:48.188 "traddr": "10.0.0.2", 00:25:48.188 "adrfam": "ipv4", 00:25:48.188 "trsvcid": "4420", 00:25:48.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:48.188 "prchk_reftag": false, 00:25:48.188 "prchk_guard": false, 00:25:48.188 "hdgst": false, 00:25:48.188 "ddgst": false, 00:25:48.188 "psk": "key0", 00:25:48.188 "allow_unrecognized_csi": false, 00:25:48.188 "method": "bdev_nvme_attach_controller", 00:25:48.188 "req_id": 1 00:25:48.188 } 00:25:48.188 Got JSON-RPC error response 00:25:48.188 response: 00:25:48.188 { 00:25:48.188 "code": -126, 00:25:48.188 "message": "Required key not available" 00:25:48.188 } 00:25:48.188 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 441834 00:25:48.188 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 441834 ']' 00:25:48.188 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 441834 00:25:48.188 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:48.188 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 441834 
00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 441834' 00:25:48.446 killing process with pid 441834 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 441834 00:25:48.446 Received shutdown signal, test time was about 10.000000 seconds 00:25:48.446 00:25:48.446 Latency(us) 00:25:48.446 [2024-12-09T23:07:32.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.446 [2024-12-09T23:07:32.919Z] =================================================================================================================== 00:25:48.446 [2024-12-09T23:07:32.919Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 441834 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 436759 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 436759 ']' 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 436759 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.446 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 436759 00:25:48.705 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:48.705 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:48.705 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 436759' 00:25:48.705 killing process with pid 436759 00:25:48.705 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 436759 00:25:48.705 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 436759 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.lTb0vi6UrS 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.lTb0vi6UrS 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=442125 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 442125 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 442125 ']' 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.705 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:48.965 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.965 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:48.965 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:48.965 [2024-12-10 00:07:33.224623] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:25:48.965 [2024-12-10 00:07:33.224674] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.965 [2024-12-10 00:07:33.315894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.965 [2024-12-10 00:07:33.355305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.965 [2024-12-10 00:07:33.355343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:48.965 [2024-12-10 00:07:33.355355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.965 [2024-12-10 00:07:33.355363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.965 [2024-12-10 00:07:33.355370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.965 [2024-12-10 00:07:33.355990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.900 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:49.900 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:49.900 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:49.900 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:49.900 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:49.900 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.900 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.lTb0vi6UrS 00:25:49.900 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lTb0vi6UrS 00:25:49.900 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:49.900 [2024-12-10 00:07:34.262464] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.900 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:50.158 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:50.417 [2024-12-10 00:07:34.635410] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:50.417 [2024-12-10 00:07:34.635638] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.417 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:50.417 malloc0 00:25:50.417 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:50.675 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lTb0vi6UrS 00:25:50.932 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lTb0vi6UrS 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lTb0vi6UrS 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=442420 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 442420 /var/tmp/bdevperf.sock 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 442420 ']' 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:51.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:51.190 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:51.190 [2024-12-10 00:07:35.457071] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
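The /tmp/tmp.lTb0vi6UrS file registered as key0 above holds the interchange string built at target/tls.sh@160: "NVMeTLSkey-1", a hash identifier (02 here, i.e. SHA-384), the base64 of the configured key with a CRC-32 appended, and a trailing colon. A minimal sketch that produces a string of the same shape; the helper name is made up, and the CRC-32 byte order is assumed to be least-significant byte first:

format_psk_interchange() {
    # Hypothetical helper, not part of the traced scripts. Assumes the format
    # "NVMeTLSkey-1:<hash>:" + base64(key || CRC-32 of key, LSB first) + ":",
    # with <hash> 01 = SHA-256 and 02 = SHA-384.
    local key=$1 hash=${2:-02}
    python3 - "$key" "$hash" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{sys.argv[2]}:{base64.b64encode(key + crc).decode()}:")
PY
}

key_long=$(format_psk_interchange 00112233445566778899aabbccddeeff0011223344556677 02)
key_long_path=$(mktemp)
echo -n "$key_long" > "$key_long_path"
chmod 0600 "$key_long_path"   # keyring_file_add_key rejects loosely-permissioned files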
00:25:51.190 [2024-12-10 00:07:35.457122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442420 ] 00:25:51.190 [2024-12-10 00:07:35.549380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.190 [2024-12-10 00:07:35.587582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.126 00:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.126 00:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:52.126 00:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lTb0vi6UrS 00:25:52.126 00:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:52.384 [2024-12-10 00:07:36.617948] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:52.384 TLSTESTn1 00:25:52.384 00:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:52.384 Running I/O for 10 seconds... 00:25:54.686 5101.00 IOPS, 19.93 MiB/s [2024-12-09T23:07:40.091Z] 4788.50 IOPS, 18.71 MiB/s [2024-12-09T23:07:41.024Z] 4899.33 IOPS, 19.14 MiB/s [2024-12-09T23:07:41.956Z] 5021.75 IOPS, 19.62 MiB/s [2024-12-09T23:07:42.888Z] 5058.20 IOPS, 19.76 MiB/s [2024-12-09T23:07:43.820Z] 5052.50 IOPS, 19.74 MiB/s [2024-12-09T23:07:45.239Z] 5110.14 IOPS, 19.96 MiB/s [2024-12-09T23:07:45.829Z] 5114.75 IOPS, 19.98 MiB/s [2024-12-09T23:07:46.823Z] 5149.67 IOPS, 20.12 MiB/s [2024-12-09T23:07:47.082Z] 5169.70 IOPS, 20.19 MiB/s 00:26:02.609 Latency(us) 00:26:02.609 [2024-12-09T23:07:47.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.609 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:02.609 Verification LBA range: start 0x0 length 0x2000 00:26:02.609 TLSTESTn1 : 10.02 5173.60 20.21 0.00 0.00 24701.75 5714.74 35651.58 00:26:02.609 [2024-12-09T23:07:47.082Z] =================================================================================================================== 00:26:02.609 [2024-12-09T23:07:47.082Z] Total : 5173.60 20.21 0.00 0.00 24701.75 5714.74 35651.58 00:26:02.609 { 00:26:02.609 "results": [ 00:26:02.609 { 00:26:02.609 "job": "TLSTESTn1", 00:26:02.609 "core_mask": "0x4", 00:26:02.609 "workload": "verify", 00:26:02.609 "status": "finished", 00:26:02.609 "verify_range": { 00:26:02.609 "start": 0, 00:26:02.609 "length": 8192 00:26:02.609 }, 00:26:02.609 "queue_depth": 128, 00:26:02.609 "io_size": 4096, 00:26:02.609 "runtime": 10.017194, 00:26:02.609 "iops": 5173.604504415109, 00:26:02.609 "mibps": 20.20939259537152, 00:26:02.609 "io_failed": 0, 00:26:02.609 "io_timeout": 0, 00:26:02.609 "avg_latency_us": 24701.750211241677, 00:26:02.609 "min_latency_us": 5714.7392, 00:26:02.609 "max_latency_us": 35651.584 00:26:02.609 } 00:26:02.609 ], 00:26:02.609 "core_count": 1 
00:26:02.609 } 00:26:02.609 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:02.609 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 442420 00:26:02.609 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 442420 ']' 00:26:02.609 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 442420 00:26:02.609 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:02.609 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.609 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442420 00:26:02.609 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:02.609 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:02.609 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442420' 00:26:02.609 killing process with pid 442420 00:26:02.609 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 442420 00:26:02.609 Received shutdown signal, test time was about 10.000000 seconds 00:26:02.609 00:26:02.609 Latency(us) 00:26:02.609 [2024-12-09T23:07:47.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.609 [2024-12-09T23:07:47.082Z] =================================================================================================================== 00:26:02.609 [2024-12-09T23:07:47.082Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.609 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 442420 00:26:02.609 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.lTb0vi6UrS 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lTb0vi6UrS 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lTb0vi6UrS 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lTb0vi6UrS 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:02.867 00:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lTb0vi6UrS 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=444407 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 444407 /var/tmp/bdevperf.sock 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 444407 ']' 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:02.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.867 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:02.868 [2024-12-10 00:07:47.136444] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
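The second bdevperf run above sits under NOT run_bdevperf: the key file was just relaxed to 0666, so the test step passes only if the attach path fails. The NOT helper is traced from common/autotest_common.sh; functionally it is an exit-status inverter, roughly like this sketch (not the actual implementation):

NOT() {
    # Succeed only when the wrapped command fails; used for negative tests.
    if "$@"; then
        return 1
    fi
    return 0
}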
00:26:02.868 [2024-12-10 00:07:47.136494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444407 ] 00:26:02.868 [2024-12-10 00:07:47.225352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.868 [2024-12-10 00:07:47.265603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:03.126 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.126 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:03.126 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lTb0vi6UrS 00:26:03.126 [2024-12-10 00:07:47.526498] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lTb0vi6UrS': 0100666 00:26:03.126 [2024-12-10 00:07:47.526529] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:03.126 request: 00:26:03.126 { 00:26:03.126 "name": "key0", 00:26:03.126 "path": "/tmp/tmp.lTb0vi6UrS", 00:26:03.126 "method": "keyring_file_add_key", 00:26:03.126 "req_id": 1 00:26:03.126 } 00:26:03.126 Got JSON-RPC error response 00:26:03.126 response: 00:26:03.126 { 00:26:03.126 "code": -1, 00:26:03.126 "message": "Operation not permitted" 00:26:03.126 } 00:26:03.126 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:03.385 [2024-12-10 00:07:47.707048] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:03.385 [2024-12-10 00:07:47.707085] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:26:03.385 request: 00:26:03.385 { 00:26:03.385 "name": "TLSTEST", 00:26:03.385 "trtype": "tcp", 00:26:03.385 "traddr": "10.0.0.2", 00:26:03.385 "adrfam": "ipv4", 00:26:03.385 "trsvcid": "4420", 00:26:03.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:03.385 "prchk_reftag": false, 00:26:03.385 "prchk_guard": false, 00:26:03.385 "hdgst": false, 00:26:03.385 "ddgst": false, 00:26:03.385 "psk": "key0", 00:26:03.385 "allow_unrecognized_csi": false, 00:26:03.385 "method": "bdev_nvme_attach_controller", 00:26:03.385 "req_id": 1 00:26:03.385 } 00:26:03.385 Got JSON-RPC error response 00:26:03.385 response: 00:26:03.385 { 00:26:03.385 "code": -126, 00:26:03.385 "message": "Required key not available" 00:26:03.385 } 00:26:03.386 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 444407 00:26:03.386 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 444407 ']' 00:26:03.386 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 444407 00:26:03.386 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:03.386 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:03.386 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 444407 00:26:03.386 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:03.386 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:03.386 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 444407' 00:26:03.386 killing process with pid 444407 00:26:03.386 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 444407 00:26:03.386 Received shutdown signal, test time was about 10.000000 seconds 00:26:03.386 00:26:03.386 Latency(us) 00:26:03.386 [2024-12-09T23:07:47.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.386 [2024-12-09T23:07:47.859Z] =================================================================================================================== 00:26:03.386 [2024-12-09T23:07:47.859Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:03.386 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 444407 00:26:03.644 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:26:03.644 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:26:03.644 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:03.644 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:03.644 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:03.644 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 442125 00:26:03.644 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 442125 ']' 00:26:03.644 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 442125 00:26:03.644 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:03.644 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:03.644 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442125 00:26:03.644 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:03.644 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:03.644 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442125' 00:26:03.644 killing process with pid 442125 00:26:03.644 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 442125 00:26:03.644 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 442125 00:26:03.902 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:26:03.902 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:03.902 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:03.902 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:03.902 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=444563 
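For reference, the initiator side exercised by run_bdevperf earlier reduces to two RPC calls against the bdevperf application socket plus the bdevperf.py workload driver, all copied from the trace above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

$rpc -s "$bdevperf_rpc_sock" keyring_file_add_key key0 /tmp/tmp.lTb0vi6UrS
$rpc -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s "$bdevperf_rpc_sock" perform_tests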
00:26:03.903 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:03.903 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 444563 00:26:03.903 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 444563 ']' 00:26:03.903 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.903 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.903 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.903 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.903 00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:03.903 [2024-12-10 00:07:48.252909] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:26:03.903 [2024-12-10 00:07:48.252962] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.903 [2024-12-10 00:07:48.347674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.160 [2024-12-10 00:07:48.387970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:04.160 [2024-12-10 00:07:48.388007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.160 [2024-12-10 00:07:48.388017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.160 [2024-12-10 00:07:48.388025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:04.160 [2024-12-10 00:07:48.388032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
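At this point the key file is still mode 0666 from the chmod at target/tls.sh@171, so the setup_nvmf_tgt attempt that follows is expected to stop at keyring_file_add_key: the file-based keyring rejects key files that group or other users can access. A rough pre-check of the kind one might run before registering a key (hypothetical snippet; the path is the temp file from this run):

key=/tmp/tmp.lTb0vi6UrS
mode=$(stat -c '%a' "$key")
if [[ $mode != 600 && $mode != 400 ]]; then
    echo "key file $key has mode $mode; tighten it first" >&2
    chmod 0600 "$key"
fi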
00:26:04.160 [2024-12-10 00:07:48.388612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.lTb0vi6UrS 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lTb0vi6UrS 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.lTb0vi6UrS 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lTb0vi6UrS 00:26:04.726 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:04.983 [2024-12-10 00:07:49.310636] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.983 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:05.241 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:05.241 [2024-12-10 00:07:49.695602] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:05.242 [2024-12-10 00:07:49.695803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.500 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:05.500 malloc0 00:26:05.500 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:05.757 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lTb0vi6UrS 00:26:06.015 [2024-12-10 
00:07:50.297179] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lTb0vi6UrS': 0100666 00:26:06.015 [2024-12-10 00:07:50.297211] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:06.015 request: 00:26:06.015 { 00:26:06.015 "name": "key0", 00:26:06.015 "path": "/tmp/tmp.lTb0vi6UrS", 00:26:06.015 "method": "keyring_file_add_key", 00:26:06.015 "req_id": 1 00:26:06.015 } 00:26:06.015 Got JSON-RPC error response 00:26:06.015 response: 00:26:06.015 { 00:26:06.015 "code": -1, 00:26:06.015 "message": "Operation not permitted" 00:26:06.015 } 00:26:06.015 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:26:06.274 [2024-12-10 00:07:50.489706] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:26:06.274 [2024-12-10 00:07:50.489744] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:26:06.274 request: 00:26:06.274 { 00:26:06.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.274 "host": "nqn.2016-06.io.spdk:host1", 00:26:06.274 "psk": "key0", 00:26:06.274 "method": "nvmf_subsystem_add_host", 00:26:06.274 "req_id": 1 00:26:06.274 } 00:26:06.274 Got JSON-RPC error response 00:26:06.274 response: 00:26:06.274 { 00:26:06.274 "code": -32603, 00:26:06.274 "message": "Internal error" 00:26:06.274 } 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 444563 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 444563 ']' 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 444563 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 444563 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 444563' 00:26:06.274 killing process with pid 444563 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 444563 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 444563 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.lTb0vi6UrS 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:06.274 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:06.534 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:06.534 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=445111 00:26:06.534 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:06.534 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 445111 00:26:06.534 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 445111 ']' 00:26:06.534 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.534 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.534 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.534 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.534 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:06.534 [2024-12-10 00:07:50.797350] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:26:06.534 [2024-12-10 00:07:50.797398] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.534 [2024-12-10 00:07:50.880058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.534 [2024-12-10 00:07:50.918125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.534 [2024-12-10 00:07:50.918160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.534 [2024-12-10 00:07:50.918169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.534 [2024-12-10 00:07:50.918178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.534 [2024-12-10 00:07:50.918185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
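With the key back at 0600 (target/tls.sh@182), the block below repeats the full target-side setup. Condensed, setup_nvmf_tgt is this RPC sequence, with the commands exactly as traced (-k on the listener enables the experimental TLS support):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=/tmp/tmp.lTb0vi6UrS

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0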
00:26:06.534 [2024-12-10 00:07:50.918750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.793 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.793 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:06.793 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:06.793 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:06.793 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:06.793 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.793 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.lTb0vi6UrS 00:26:06.793 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lTb0vi6UrS 00:26:06.793 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:06.793 [2024-12-10 00:07:51.225911] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.793 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:07.050 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:07.309 [2024-12-10 00:07:51.602837] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:07.309 [2024-12-10 00:07:51.603045] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.309 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:07.567 malloc0 00:26:07.567 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:07.567 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lTb0vi6UrS 00:26:07.826 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:26:08.084 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:08.084 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=445401 00:26:08.084 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:08.085 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 445401 /var/tmp/bdevperf.sock 00:26:08.085 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 445401 ']' 00:26:08.085 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.085 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.085 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:08.085 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.085 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:08.085 [2024-12-10 00:07:52.404739] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:26:08.085 [2024-12-10 00:07:52.404786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445401 ] 00:26:08.085 [2024-12-10 00:07:52.495001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.085 [2024-12-10 00:07:52.534394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.343 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:08.343 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:08.343 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lTb0vi6UrS 00:26:08.343 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:08.599 [2024-12-10 00:07:52.966796] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:08.599 TLSTESTn1 00:26:08.599 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:26:08.857 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:26:08.857 "subsystems": [ 00:26:08.857 { 00:26:08.857 "subsystem": "keyring", 00:26:08.857 "config": [ 00:26:08.857 { 00:26:08.857 "method": "keyring_file_add_key", 00:26:08.857 "params": { 00:26:08.857 "name": "key0", 00:26:08.857 "path": "/tmp/tmp.lTb0vi6UrS" 00:26:08.857 } 00:26:08.857 } 00:26:08.857 ] 00:26:08.857 }, 00:26:08.857 { 00:26:08.857 "subsystem": "iobuf", 00:26:08.857 "config": [ 00:26:08.857 { 00:26:08.857 "method": "iobuf_set_options", 00:26:08.857 "params": { 00:26:08.857 "small_pool_count": 8192, 00:26:08.857 "large_pool_count": 1024, 00:26:08.857 "small_bufsize": 8192, 00:26:08.857 "large_bufsize": 135168, 00:26:08.857 "enable_numa": false 00:26:08.857 } 00:26:08.857 } 00:26:08.857 ] 00:26:08.857 }, 00:26:08.857 { 00:26:08.857 "subsystem": "sock", 00:26:08.857 "config": [ 00:26:08.857 { 00:26:08.857 "method": "sock_set_default_impl", 00:26:08.857 "params": { 00:26:08.857 "impl_name": "posix" 
00:26:08.857 } 00:26:08.857 }, 00:26:08.857 { 00:26:08.857 "method": "sock_impl_set_options", 00:26:08.857 "params": { 00:26:08.857 "impl_name": "ssl", 00:26:08.857 "recv_buf_size": 4096, 00:26:08.857 "send_buf_size": 4096, 00:26:08.857 "enable_recv_pipe": true, 00:26:08.857 "enable_quickack": false, 00:26:08.857 "enable_placement_id": 0, 00:26:08.857 "enable_zerocopy_send_server": true, 00:26:08.857 "enable_zerocopy_send_client": false, 00:26:08.857 "zerocopy_threshold": 0, 00:26:08.857 "tls_version": 0, 00:26:08.857 "enable_ktls": false 00:26:08.857 } 00:26:08.857 }, 00:26:08.857 { 00:26:08.857 "method": "sock_impl_set_options", 00:26:08.857 "params": { 00:26:08.857 "impl_name": "posix", 00:26:08.857 "recv_buf_size": 2097152, 00:26:08.857 "send_buf_size": 2097152, 00:26:08.857 "enable_recv_pipe": true, 00:26:08.857 "enable_quickack": false, 00:26:08.857 "enable_placement_id": 0, 00:26:08.857 "enable_zerocopy_send_server": true, 00:26:08.857 "enable_zerocopy_send_client": false, 00:26:08.857 "zerocopy_threshold": 0, 00:26:08.857 "tls_version": 0, 00:26:08.857 "enable_ktls": false 00:26:08.857 } 00:26:08.857 } 00:26:08.857 ] 00:26:08.857 }, 00:26:08.857 { 00:26:08.857 "subsystem": "vmd", 00:26:08.857 "config": [] 00:26:08.857 }, 00:26:08.857 { 00:26:08.857 "subsystem": "accel", 00:26:08.857 "config": [ 00:26:08.857 { 00:26:08.857 "method": "accel_set_options", 00:26:08.857 "params": { 00:26:08.857 "small_cache_size": 128, 00:26:08.857 "large_cache_size": 16, 00:26:08.857 "task_count": 2048, 00:26:08.857 "sequence_count": 2048, 00:26:08.857 "buf_count": 2048 00:26:08.857 } 00:26:08.857 } 00:26:08.857 ] 00:26:08.857 }, 00:26:08.857 { 00:26:08.857 "subsystem": "bdev", 00:26:08.857 "config": [ 00:26:08.857 { 00:26:08.857 "method": "bdev_set_options", 00:26:08.857 "params": { 00:26:08.857 "bdev_io_pool_size": 65535, 00:26:08.857 "bdev_io_cache_size": 256, 00:26:08.857 "bdev_auto_examine": true, 00:26:08.857 "iobuf_small_cache_size": 128, 00:26:08.857 "iobuf_large_cache_size": 16 00:26:08.857 } 00:26:08.857 }, 00:26:08.857 { 00:26:08.857 "method": "bdev_raid_set_options", 00:26:08.857 "params": { 00:26:08.857 "process_window_size_kb": 1024, 00:26:08.857 "process_max_bandwidth_mb_sec": 0 00:26:08.857 } 00:26:08.857 }, 00:26:08.857 { 00:26:08.857 "method": "bdev_iscsi_set_options", 00:26:08.857 "params": { 00:26:08.857 "timeout_sec": 30 00:26:08.857 } 00:26:08.857 }, 00:26:08.857 { 00:26:08.857 "method": "bdev_nvme_set_options", 00:26:08.857 "params": { 00:26:08.857 "action_on_timeout": "none", 00:26:08.857 "timeout_us": 0, 00:26:08.857 "timeout_admin_us": 0, 00:26:08.857 "keep_alive_timeout_ms": 10000, 00:26:08.857 "arbitration_burst": 0, 00:26:08.857 "low_priority_weight": 0, 00:26:08.857 "medium_priority_weight": 0, 00:26:08.857 "high_priority_weight": 0, 00:26:08.857 "nvme_adminq_poll_period_us": 10000, 00:26:08.857 "nvme_ioq_poll_period_us": 0, 00:26:08.857 "io_queue_requests": 0, 00:26:08.857 "delay_cmd_submit": true, 00:26:08.857 "transport_retry_count": 4, 00:26:08.857 "bdev_retry_count": 3, 00:26:08.857 "transport_ack_timeout": 0, 00:26:08.857 "ctrlr_loss_timeout_sec": 0, 00:26:08.857 "reconnect_delay_sec": 0, 00:26:08.857 "fast_io_fail_timeout_sec": 0, 00:26:08.857 "disable_auto_failback": false, 00:26:08.857 "generate_uuids": false, 00:26:08.857 "transport_tos": 0, 00:26:08.857 "nvme_error_stat": false, 00:26:08.857 "rdma_srq_size": 0, 00:26:08.857 "io_path_stat": false, 00:26:08.857 "allow_accel_sequence": false, 00:26:08.857 "rdma_max_cq_size": 0, 00:26:08.858 
"rdma_cm_event_timeout_ms": 0, 00:26:08.858 "dhchap_digests": [ 00:26:08.858 "sha256", 00:26:08.858 "sha384", 00:26:08.858 "sha512" 00:26:08.858 ], 00:26:08.858 "dhchap_dhgroups": [ 00:26:08.858 "null", 00:26:08.858 "ffdhe2048", 00:26:08.858 "ffdhe3072", 00:26:08.858 "ffdhe4096", 00:26:08.858 "ffdhe6144", 00:26:08.858 "ffdhe8192" 00:26:08.858 ] 00:26:08.858 } 00:26:08.858 }, 00:26:08.858 { 00:26:08.858 "method": "bdev_nvme_set_hotplug", 00:26:08.858 "params": { 00:26:08.858 "period_us": 100000, 00:26:08.858 "enable": false 00:26:08.858 } 00:26:08.858 }, 00:26:08.858 { 00:26:08.858 "method": "bdev_malloc_create", 00:26:08.858 "params": { 00:26:08.858 "name": "malloc0", 00:26:08.858 "num_blocks": 8192, 00:26:08.858 "block_size": 4096, 00:26:08.858 "physical_block_size": 4096, 00:26:08.858 "uuid": "8025ea90-bd2f-4b65-87a2-ae71824160f0", 00:26:08.858 "optimal_io_boundary": 0, 00:26:08.858 "md_size": 0, 00:26:08.858 "dif_type": 0, 00:26:08.858 "dif_is_head_of_md": false, 00:26:08.858 "dif_pi_format": 0 00:26:08.858 } 00:26:08.858 }, 00:26:08.858 { 00:26:08.858 "method": "bdev_wait_for_examine" 00:26:08.858 } 00:26:08.858 ] 00:26:08.858 }, 00:26:08.858 { 00:26:08.858 "subsystem": "nbd", 00:26:08.858 "config": [] 00:26:08.858 }, 00:26:08.858 { 00:26:08.858 "subsystem": "scheduler", 00:26:08.858 "config": [ 00:26:08.858 { 00:26:08.858 "method": "framework_set_scheduler", 00:26:08.858 "params": { 00:26:08.858 "name": "static" 00:26:08.858 } 00:26:08.858 } 00:26:08.858 ] 00:26:08.858 }, 00:26:08.858 { 00:26:08.858 "subsystem": "nvmf", 00:26:08.858 "config": [ 00:26:08.858 { 00:26:08.858 "method": "nvmf_set_config", 00:26:08.858 "params": { 00:26:08.858 "discovery_filter": "match_any", 00:26:08.858 "admin_cmd_passthru": { 00:26:08.858 "identify_ctrlr": false 00:26:08.858 }, 00:26:08.858 "dhchap_digests": [ 00:26:08.858 "sha256", 00:26:08.858 "sha384", 00:26:08.858 "sha512" 00:26:08.858 ], 00:26:08.858 "dhchap_dhgroups": [ 00:26:08.858 "null", 00:26:08.858 "ffdhe2048", 00:26:08.858 "ffdhe3072", 00:26:08.858 "ffdhe4096", 00:26:08.858 "ffdhe6144", 00:26:08.858 "ffdhe8192" 00:26:08.858 ] 00:26:08.858 } 00:26:08.858 }, 00:26:08.858 { 00:26:08.858 "method": "nvmf_set_max_subsystems", 00:26:08.858 "params": { 00:26:08.858 "max_subsystems": 1024 00:26:08.858 } 00:26:08.858 }, 00:26:08.858 { 00:26:08.858 "method": "nvmf_set_crdt", 00:26:08.858 "params": { 00:26:08.858 "crdt1": 0, 00:26:08.858 "crdt2": 0, 00:26:08.858 "crdt3": 0 00:26:08.858 } 00:26:08.858 }, 00:26:08.858 { 00:26:08.858 "method": "nvmf_create_transport", 00:26:08.858 "params": { 00:26:08.858 "trtype": "TCP", 00:26:08.858 "max_queue_depth": 128, 00:26:08.858 "max_io_qpairs_per_ctrlr": 127, 00:26:08.858 "in_capsule_data_size": 4096, 00:26:08.858 "max_io_size": 131072, 00:26:08.858 "io_unit_size": 131072, 00:26:08.858 "max_aq_depth": 128, 00:26:08.858 "num_shared_buffers": 511, 00:26:08.858 "buf_cache_size": 4294967295, 00:26:08.858 "dif_insert_or_strip": false, 00:26:08.858 "zcopy": false, 00:26:08.858 "c2h_success": false, 00:26:08.858 "sock_priority": 0, 00:26:08.858 "abort_timeout_sec": 1, 00:26:08.858 "ack_timeout": 0, 00:26:08.858 "data_wr_pool_size": 0 00:26:08.858 } 00:26:08.858 }, 00:26:08.858 { 00:26:08.858 "method": "nvmf_create_subsystem", 00:26:08.858 "params": { 00:26:08.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.858 "allow_any_host": false, 00:26:08.858 "serial_number": "SPDK00000000000001", 00:26:08.858 "model_number": "SPDK bdev Controller", 00:26:08.858 "max_namespaces": 10, 00:26:08.858 "min_cntlid": 1, 00:26:08.858 
"max_cntlid": 65519, 00:26:08.858 "ana_reporting": false 00:26:08.858 } 00:26:08.858 }, 00:26:08.858 { 00:26:08.858 "method": "nvmf_subsystem_add_host", 00:26:08.858 "params": { 00:26:08.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.858 "host": "nqn.2016-06.io.spdk:host1", 00:26:08.858 "psk": "key0" 00:26:08.858 } 00:26:08.858 }, 00:26:08.858 { 00:26:08.858 "method": "nvmf_subsystem_add_ns", 00:26:08.858 "params": { 00:26:08.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.858 "namespace": { 00:26:08.858 "nsid": 1, 00:26:08.858 "bdev_name": "malloc0", 00:26:08.858 "nguid": "8025EA90BD2F4B6587A2AE71824160F0", 00:26:08.858 "uuid": "8025ea90-bd2f-4b65-87a2-ae71824160f0", 00:26:08.858 "no_auto_visible": false 00:26:08.858 } 00:26:08.858 } 00:26:08.858 }, 00:26:08.858 { 00:26:08.858 "method": "nvmf_subsystem_add_listener", 00:26:08.858 "params": { 00:26:08.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.858 "listen_address": { 00:26:08.858 "trtype": "TCP", 00:26:08.858 "adrfam": "IPv4", 00:26:08.858 "traddr": "10.0.0.2", 00:26:08.858 "trsvcid": "4420" 00:26:08.858 }, 00:26:08.858 "secure_channel": true 00:26:08.858 } 00:26:08.858 } 00:26:08.858 ] 00:26:08.858 } 00:26:08.858 ] 00:26:08.858 }' 00:26:08.858 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:09.429 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:26:09.429 "subsystems": [ 00:26:09.429 { 00:26:09.429 "subsystem": "keyring", 00:26:09.429 "config": [ 00:26:09.429 { 00:26:09.429 "method": "keyring_file_add_key", 00:26:09.429 "params": { 00:26:09.429 "name": "key0", 00:26:09.429 "path": "/tmp/tmp.lTb0vi6UrS" 00:26:09.429 } 00:26:09.429 } 00:26:09.429 ] 00:26:09.429 }, 00:26:09.429 { 00:26:09.429 "subsystem": "iobuf", 00:26:09.429 "config": [ 00:26:09.429 { 00:26:09.429 "method": "iobuf_set_options", 00:26:09.429 "params": { 00:26:09.429 "small_pool_count": 8192, 00:26:09.429 "large_pool_count": 1024, 00:26:09.429 "small_bufsize": 8192, 00:26:09.429 "large_bufsize": 135168, 00:26:09.429 "enable_numa": false 00:26:09.429 } 00:26:09.429 } 00:26:09.429 ] 00:26:09.429 }, 00:26:09.429 { 00:26:09.429 "subsystem": "sock", 00:26:09.429 "config": [ 00:26:09.429 { 00:26:09.429 "method": "sock_set_default_impl", 00:26:09.429 "params": { 00:26:09.429 "impl_name": "posix" 00:26:09.429 } 00:26:09.429 }, 00:26:09.429 { 00:26:09.429 "method": "sock_impl_set_options", 00:26:09.429 "params": { 00:26:09.429 "impl_name": "ssl", 00:26:09.429 "recv_buf_size": 4096, 00:26:09.429 "send_buf_size": 4096, 00:26:09.429 "enable_recv_pipe": true, 00:26:09.429 "enable_quickack": false, 00:26:09.429 "enable_placement_id": 0, 00:26:09.429 "enable_zerocopy_send_server": true, 00:26:09.429 "enable_zerocopy_send_client": false, 00:26:09.429 "zerocopy_threshold": 0, 00:26:09.429 "tls_version": 0, 00:26:09.429 "enable_ktls": false 00:26:09.429 } 00:26:09.429 }, 00:26:09.429 { 00:26:09.429 "method": "sock_impl_set_options", 00:26:09.429 "params": { 00:26:09.429 "impl_name": "posix", 00:26:09.429 "recv_buf_size": 2097152, 00:26:09.429 "send_buf_size": 2097152, 00:26:09.429 "enable_recv_pipe": true, 00:26:09.429 "enable_quickack": false, 00:26:09.429 "enable_placement_id": 0, 00:26:09.429 "enable_zerocopy_send_server": true, 00:26:09.429 "enable_zerocopy_send_client": false, 00:26:09.429 "zerocopy_threshold": 0, 00:26:09.429 "tls_version": 0, 00:26:09.429 "enable_ktls": false 00:26:09.429 } 00:26:09.429 
} 00:26:09.429 ] 00:26:09.429 }, 00:26:09.429 { 00:26:09.429 "subsystem": "vmd", 00:26:09.429 "config": [] 00:26:09.429 }, 00:26:09.429 { 00:26:09.429 "subsystem": "accel", 00:26:09.429 "config": [ 00:26:09.429 { 00:26:09.429 "method": "accel_set_options", 00:26:09.429 "params": { 00:26:09.429 "small_cache_size": 128, 00:26:09.429 "large_cache_size": 16, 00:26:09.429 "task_count": 2048, 00:26:09.429 "sequence_count": 2048, 00:26:09.429 "buf_count": 2048 00:26:09.429 } 00:26:09.429 } 00:26:09.429 ] 00:26:09.429 }, 00:26:09.429 { 00:26:09.429 "subsystem": "bdev", 00:26:09.429 "config": [ 00:26:09.429 { 00:26:09.429 "method": "bdev_set_options", 00:26:09.429 "params": { 00:26:09.429 "bdev_io_pool_size": 65535, 00:26:09.429 "bdev_io_cache_size": 256, 00:26:09.429 "bdev_auto_examine": true, 00:26:09.429 "iobuf_small_cache_size": 128, 00:26:09.429 "iobuf_large_cache_size": 16 00:26:09.429 } 00:26:09.429 }, 00:26:09.429 { 00:26:09.429 "method": "bdev_raid_set_options", 00:26:09.429 "params": { 00:26:09.429 "process_window_size_kb": 1024, 00:26:09.429 "process_max_bandwidth_mb_sec": 0 00:26:09.429 } 00:26:09.429 }, 00:26:09.429 { 00:26:09.429 "method": "bdev_iscsi_set_options", 00:26:09.429 "params": { 00:26:09.429 "timeout_sec": 30 00:26:09.429 } 00:26:09.429 }, 00:26:09.429 { 00:26:09.429 "method": "bdev_nvme_set_options", 00:26:09.429 "params": { 00:26:09.429 "action_on_timeout": "none", 00:26:09.429 "timeout_us": 0, 00:26:09.429 "timeout_admin_us": 0, 00:26:09.429 "keep_alive_timeout_ms": 10000, 00:26:09.429 "arbitration_burst": 0, 00:26:09.429 "low_priority_weight": 0, 00:26:09.429 "medium_priority_weight": 0, 00:26:09.429 "high_priority_weight": 0, 00:26:09.429 "nvme_adminq_poll_period_us": 10000, 00:26:09.429 "nvme_ioq_poll_period_us": 0, 00:26:09.430 "io_queue_requests": 512, 00:26:09.430 "delay_cmd_submit": true, 00:26:09.430 "transport_retry_count": 4, 00:26:09.430 "bdev_retry_count": 3, 00:26:09.430 "transport_ack_timeout": 0, 00:26:09.430 "ctrlr_loss_timeout_sec": 0, 00:26:09.430 "reconnect_delay_sec": 0, 00:26:09.430 "fast_io_fail_timeout_sec": 0, 00:26:09.430 "disable_auto_failback": false, 00:26:09.430 "generate_uuids": false, 00:26:09.430 "transport_tos": 0, 00:26:09.430 "nvme_error_stat": false, 00:26:09.430 "rdma_srq_size": 0, 00:26:09.430 "io_path_stat": false, 00:26:09.430 "allow_accel_sequence": false, 00:26:09.430 "rdma_max_cq_size": 0, 00:26:09.430 "rdma_cm_event_timeout_ms": 0, 00:26:09.430 "dhchap_digests": [ 00:26:09.430 "sha256", 00:26:09.430 "sha384", 00:26:09.430 "sha512" 00:26:09.430 ], 00:26:09.430 "dhchap_dhgroups": [ 00:26:09.430 "null", 00:26:09.430 "ffdhe2048", 00:26:09.430 "ffdhe3072", 00:26:09.430 "ffdhe4096", 00:26:09.430 "ffdhe6144", 00:26:09.430 "ffdhe8192" 00:26:09.430 ] 00:26:09.430 } 00:26:09.430 }, 00:26:09.430 { 00:26:09.430 "method": "bdev_nvme_attach_controller", 00:26:09.430 "params": { 00:26:09.430 "name": "TLSTEST", 00:26:09.430 "trtype": "TCP", 00:26:09.430 "adrfam": "IPv4", 00:26:09.430 "traddr": "10.0.0.2", 00:26:09.430 "trsvcid": "4420", 00:26:09.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.430 "prchk_reftag": false, 00:26:09.430 "prchk_guard": false, 00:26:09.430 "ctrlr_loss_timeout_sec": 0, 00:26:09.430 "reconnect_delay_sec": 0, 00:26:09.430 "fast_io_fail_timeout_sec": 0, 00:26:09.430 "psk": "key0", 00:26:09.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:09.430 "hdgst": false, 00:26:09.430 "ddgst": false, 00:26:09.430 "multipath": "multipath" 00:26:09.430 } 00:26:09.430 }, 00:26:09.430 { 00:26:09.430 "method": 
"bdev_nvme_set_hotplug", 00:26:09.430 "params": { 00:26:09.430 "period_us": 100000, 00:26:09.430 "enable": false 00:26:09.430 } 00:26:09.430 }, 00:26:09.430 { 00:26:09.430 "method": "bdev_wait_for_examine" 00:26:09.430 } 00:26:09.430 ] 00:26:09.430 }, 00:26:09.430 { 00:26:09.430 "subsystem": "nbd", 00:26:09.430 "config": [] 00:26:09.430 } 00:26:09.430 ] 00:26:09.430 }' 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 445401 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 445401 ']' 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 445401 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445401 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445401' 00:26:09.430 killing process with pid 445401 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 445401 00:26:09.430 Received shutdown signal, test time was about 10.000000 seconds 00:26:09.430 00:26:09.430 Latency(us) 00:26:09.430 [2024-12-09T23:07:53.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.430 [2024-12-09T23:07:53.903Z] =================================================================================================================== 00:26:09.430 [2024-12-09T23:07:53.903Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 445401 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 445111 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 445111 ']' 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 445111 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445111 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445111' 00:26:09.430 killing process with pid 445111 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 445111 00:26:09.430 00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 445111 00:26:09.691 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:26:09.691 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:09.691 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:09.691 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:26:09.691 "subsystems": [ 00:26:09.691 { 00:26:09.691 "subsystem": "keyring", 00:26:09.691 "config": [ 00:26:09.691 { 00:26:09.691 "method": "keyring_file_add_key", 00:26:09.691 "params": { 00:26:09.691 "name": "key0", 00:26:09.691 "path": "/tmp/tmp.lTb0vi6UrS" 00:26:09.691 } 00:26:09.691 } 00:26:09.691 ] 00:26:09.691 }, 00:26:09.691 { 00:26:09.691 "subsystem": "iobuf", 00:26:09.691 "config": [ 00:26:09.691 { 00:26:09.691 "method": "iobuf_set_options", 00:26:09.691 "params": { 00:26:09.691 "small_pool_count": 8192, 00:26:09.691 "large_pool_count": 1024, 00:26:09.691 "small_bufsize": 8192, 00:26:09.691 "large_bufsize": 135168, 00:26:09.691 "enable_numa": false 00:26:09.691 } 00:26:09.691 } 00:26:09.691 ] 00:26:09.691 }, 00:26:09.691 { 00:26:09.691 "subsystem": "sock", 00:26:09.691 "config": [ 00:26:09.691 { 00:26:09.691 "method": "sock_set_default_impl", 00:26:09.691 "params": { 00:26:09.691 "impl_name": "posix" 00:26:09.691 } 00:26:09.691 }, 00:26:09.691 { 00:26:09.691 "method": "sock_impl_set_options", 00:26:09.691 "params": { 00:26:09.691 "impl_name": "ssl", 00:26:09.691 "recv_buf_size": 4096, 00:26:09.691 "send_buf_size": 4096, 00:26:09.691 "enable_recv_pipe": true, 00:26:09.691 "enable_quickack": false, 00:26:09.691 "enable_placement_id": 0, 00:26:09.691 "enable_zerocopy_send_server": true, 00:26:09.691 "enable_zerocopy_send_client": false, 00:26:09.691 "zerocopy_threshold": 0, 00:26:09.691 "tls_version": 0, 00:26:09.691 "enable_ktls": false 00:26:09.691 } 00:26:09.691 }, 00:26:09.691 { 00:26:09.691 "method": "sock_impl_set_options", 00:26:09.691 "params": { 00:26:09.691 "impl_name": "posix", 00:26:09.691 "recv_buf_size": 2097152, 00:26:09.691 "send_buf_size": 2097152, 00:26:09.691 "enable_recv_pipe": true, 00:26:09.691 "enable_quickack": false, 00:26:09.691 "enable_placement_id": 0, 00:26:09.691 "enable_zerocopy_send_server": true, 00:26:09.691 "enable_zerocopy_send_client": false, 00:26:09.691 "zerocopy_threshold": 0, 00:26:09.691 "tls_version": 0, 00:26:09.691 "enable_ktls": false 00:26:09.691 } 00:26:09.691 } 00:26:09.691 ] 00:26:09.691 }, 00:26:09.691 { 00:26:09.691 "subsystem": "vmd", 00:26:09.691 "config": [] 00:26:09.691 }, 00:26:09.691 { 00:26:09.691 "subsystem": "accel", 00:26:09.691 "config": [ 00:26:09.691 { 00:26:09.691 "method": "accel_set_options", 00:26:09.691 "params": { 00:26:09.691 "small_cache_size": 128, 00:26:09.692 "large_cache_size": 16, 00:26:09.692 "task_count": 2048, 00:26:09.692 "sequence_count": 2048, 00:26:09.692 "buf_count": 2048 00:26:09.692 } 00:26:09.692 } 00:26:09.692 ] 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "subsystem": "bdev", 00:26:09.692 "config": [ 00:26:09.692 { 00:26:09.692 "method": "bdev_set_options", 00:26:09.692 "params": { 00:26:09.692 "bdev_io_pool_size": 65535, 00:26:09.692 "bdev_io_cache_size": 256, 00:26:09.692 "bdev_auto_examine": true, 00:26:09.692 "iobuf_small_cache_size": 128, 00:26:09.692 "iobuf_large_cache_size": 16 00:26:09.692 } 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "method": "bdev_raid_set_options", 00:26:09.692 "params": { 00:26:09.692 "process_window_size_kb": 1024, 00:26:09.692 "process_max_bandwidth_mb_sec": 0 00:26:09.692 } 00:26:09.692 }, 
00:26:09.692 { 00:26:09.692 "method": "bdev_iscsi_set_options", 00:26:09.692 "params": { 00:26:09.692 "timeout_sec": 30 00:26:09.692 } 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "method": "bdev_nvme_set_options", 00:26:09.692 "params": { 00:26:09.692 "action_on_timeout": "none", 00:26:09.692 "timeout_us": 0, 00:26:09.692 "timeout_admin_us": 0, 00:26:09.692 "keep_alive_timeout_ms": 10000, 00:26:09.692 "arbitration_burst": 0, 00:26:09.692 "low_priority_weight": 0, 00:26:09.692 "medium_priority_weight": 0, 00:26:09.692 "high_priority_weight": 0, 00:26:09.692 "nvme_adminq_poll_period_us": 10000, 00:26:09.692 "nvme_ioq_poll_period_us": 0, 00:26:09.692 "io_queue_requests": 0, 00:26:09.692 "delay_cmd_submit": true, 00:26:09.692 "transport_retry_count": 4, 00:26:09.692 "bdev_retry_count": 3, 00:26:09.692 "transport_ack_timeout": 0, 00:26:09.692 "ctrlr_loss_timeout_sec": 0, 00:26:09.692 "reconnect_delay_sec": 0, 00:26:09.692 "fast_io_fail_timeout_sec": 0, 00:26:09.692 "disable_auto_failback": false, 00:26:09.692 "generate_uuids": false, 00:26:09.692 "transport_tos": 0, 00:26:09.692 "nvme_error_stat": false, 00:26:09.692 "rdma_srq_size": 0, 00:26:09.692 "io_path_stat": false, 00:26:09.692 "allow_accel_sequence": false, 00:26:09.692 "rdma_max_cq_size": 0, 00:26:09.692 "rdma_cm_event_timeout_ms": 0, 00:26:09.692 "dhchap_digests": [ 00:26:09.692 "sha256", 00:26:09.692 "sha384", 00:26:09.692 "sha512" 00:26:09.692 ], 00:26:09.692 "dhchap_dhgroups": [ 00:26:09.692 "null", 00:26:09.692 "ffdhe2048", 00:26:09.692 "ffdhe3072", 00:26:09.692 "ffdhe4096", 00:26:09.692 "ffdhe6144", 00:26:09.692 "ffdhe8192" 00:26:09.692 ] 00:26:09.692 } 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "method": "bdev_nvme_set_hotplug", 00:26:09.692 "params": { 00:26:09.692 "period_us": 100000, 00:26:09.692 "enable": false 00:26:09.692 } 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "method": "bdev_malloc_create", 00:26:09.692 "params": { 00:26:09.692 "name": "malloc0", 00:26:09.692 "num_blocks": 8192, 00:26:09.692 "block_size": 4096, 00:26:09.692 "physical_block_size": 4096, 00:26:09.692 "uuid": "8025ea90-bd2f-4b65-87a2-ae71824160f0", 00:26:09.692 "optimal_io_boundary": 0, 00:26:09.692 "md_size": 0, 00:26:09.692 "dif_type": 0, 00:26:09.692 "dif_is_head_of_md": false, 00:26:09.692 "dif_pi_format": 0 00:26:09.692 } 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "method": "bdev_wait_for_examine" 00:26:09.692 } 00:26:09.692 ] 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "subsystem": "nbd", 00:26:09.692 "config": [] 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "subsystem": "scheduler", 00:26:09.692 "config": [ 00:26:09.692 { 00:26:09.692 "method": "framework_set_scheduler", 00:26:09.692 "params": { 00:26:09.692 "name": "static" 00:26:09.692 } 00:26:09.692 } 00:26:09.692 ] 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "subsystem": "nvmf", 00:26:09.692 "config": [ 00:26:09.692 { 00:26:09.692 "method": "nvmf_set_config", 00:26:09.692 "params": { 00:26:09.692 "discovery_filter": "match_any", 00:26:09.692 "admin_cmd_passthru": { 00:26:09.692 "identify_ctrlr": false 00:26:09.692 }, 00:26:09.692 "dhchap_digests": [ 00:26:09.692 "sha256", 00:26:09.692 "sha384", 00:26:09.692 "sha512" 00:26:09.692 ], 00:26:09.692 "dhchap_dhgroups": [ 00:26:09.692 "null", 00:26:09.692 "ffdhe2048", 00:26:09.692 "ffdhe3072", 00:26:09.692 "ffdhe4096", 00:26:09.692 "ffdhe6144", 00:26:09.692 "ffdhe8192" 00:26:09.692 ] 00:26:09.692 } 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "method": "nvmf_set_max_subsystems", 00:26:09.692 "params": { 00:26:09.692 "max_subsystems": 1024 
00:26:09.692 } 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "method": "nvmf_set_crdt", 00:26:09.692 "params": { 00:26:09.692 "crdt1": 0, 00:26:09.692 "crdt2": 0, 00:26:09.692 "crdt3": 0 00:26:09.692 } 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "method": "nvmf_create_transport", 00:26:09.692 "params": { 00:26:09.692 "trtype": "TCP", 00:26:09.692 "max_queue_depth": 128, 00:26:09.692 "max_io_qpairs_per_ctrlr": 127, 00:26:09.692 "in_capsule_data_size": 4096, 00:26:09.692 "max_io_size": 131072, 00:26:09.692 "io_unit_size": 131072, 00:26:09.692 "max_aq_depth": 128, 00:26:09.692 "num_shared_buffers": 511, 00:26:09.692 "buf_cache_size": 4294967295, 00:26:09.692 "dif_insert_or_strip": false, 00:26:09.692 "zcopy": false, 00:26:09.692 "c2h_success": false, 00:26:09.692 "sock_priority": 0, 00:26:09.692 "abort_timeout_sec": 1, 00:26:09.692 "ack_timeout": 0, 00:26:09.692 "data_wr_pool_size": 0 00:26:09.692 } 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "method": "nvmf_create_subsystem", 00:26:09.692 "params": { 00:26:09.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.692 "allow_any_host": false, 00:26:09.692 "serial_number": "SPDK00000000000001", 00:26:09.692 "model_number": "SPDK bdev Controller", 00:26:09.692 "max_namespaces": 10, 00:26:09.692 "min_cntlid": 1, 00:26:09.692 "max_cntlid": 65519, 00:26:09.692 "ana_reporting": false 00:26:09.692 } 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "method": "nvmf_subsystem_add_host", 00:26:09.692 "params": { 00:26:09.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.692 "host": "nqn.2016-06.io.spdk:host1", 00:26:09.692 "psk": "key0" 00:26:09.692 } 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "method": "nvmf_subsystem_add_ns", 00:26:09.692 "params": { 00:26:09.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.692 "namespace": { 00:26:09.692 "nsid": 1, 00:26:09.692 "bdev_name": "malloc0", 00:26:09.692 "nguid": "8025EA90BD2F4B6587A2AE71824160F0", 00:26:09.692 "uuid": "8025ea90-bd2f-4b65-87a2-ae71824160f0", 00:26:09.692 "no_auto_visible": false 00:26:09.692 } 00:26:09.692 } 00:26:09.692 }, 00:26:09.692 { 00:26:09.692 "method": "nvmf_subsystem_add_listener", 00:26:09.692 "params": { 00:26:09.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.692 "listen_address": { 00:26:09.692 "trtype": "TCP", 00:26:09.692 "adrfam": "IPv4", 00:26:09.692 "traddr": "10.0.0.2", 00:26:09.692 "trsvcid": "4420" 00:26:09.692 }, 00:26:09.692 "secure_channel": true 00:26:09.692 } 00:26:09.692 } 00:26:09.692 ] 00:26:09.692 } 00:26:09.692 ] 00:26:09.692 }' 00:26:09.692 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:09.692 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=445680 00:26:09.692 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:26:09.692 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 445680 00:26:09.692 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 445680 ']' 00:26:09.692 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.692 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:09.693 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:26:09.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.693 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:09.693 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:09.693 [2024-12-10 00:07:54.103635] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:26:09.693 [2024-12-10 00:07:54.103684] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.952 [2024-12-10 00:07:54.195370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.952 [2024-12-10 00:07:54.232930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.952 [2024-12-10 00:07:54.232965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.952 [2024-12-10 00:07:54.232974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.952 [2024-12-10 00:07:54.232982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.952 [2024-12-10 00:07:54.232990] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:09.952 [2024-12-10 00:07:54.233579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.218 [2024-12-10 00:07:54.446107] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.218 [2024-12-10 00:07:54.478125] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:10.218 [2024-12-10 00:07:54.478332] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.478 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:10.478 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:10.478 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:10.478 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:10.478 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:10.737 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.737 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=445839 00:26:10.737 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:26:10.737 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 445839 /var/tmp/bdevperf.sock 00:26:10.737 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 445839 ']' 00:26:10.737 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:10.737 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.737 00:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:26:10.738 "subsystems": [ 00:26:10.738 { 00:26:10.738 "subsystem": "keyring", 00:26:10.738 "config": [ 00:26:10.738 { 00:26:10.738 "method": "keyring_file_add_key", 00:26:10.738 "params": { 00:26:10.738 "name": "key0", 00:26:10.738 "path": "/tmp/tmp.lTb0vi6UrS" 00:26:10.738 } 00:26:10.738 } 00:26:10.738 ] 00:26:10.738 }, 00:26:10.738 { 00:26:10.738 "subsystem": "iobuf", 00:26:10.738 "config": [ 00:26:10.738 { 00:26:10.738 "method": "iobuf_set_options", 00:26:10.738 "params": { 00:26:10.738 "small_pool_count": 8192, 00:26:10.738 "large_pool_count": 1024, 00:26:10.738 "small_bufsize": 8192, 00:26:10.738 "large_bufsize": 135168, 00:26:10.738 "enable_numa": false 00:26:10.738 } 00:26:10.738 } 00:26:10.738 ] 00:26:10.738 }, 00:26:10.738 { 00:26:10.738 "subsystem": "sock", 00:26:10.738 "config": [ 00:26:10.738 { 00:26:10.738 "method": "sock_set_default_impl", 00:26:10.738 "params": { 00:26:10.738 "impl_name": "posix" 00:26:10.738 } 00:26:10.738 }, 00:26:10.738 { 00:26:10.738 "method": "sock_impl_set_options", 00:26:10.738 "params": { 00:26:10.738 "impl_name": "ssl", 00:26:10.738 "recv_buf_size": 4096, 00:26:10.738 "send_buf_size": 4096, 00:26:10.738 "enable_recv_pipe": true, 00:26:10.738 "enable_quickack": false, 00:26:10.738 "enable_placement_id": 0, 00:26:10.738 "enable_zerocopy_send_server": true, 00:26:10.738 "enable_zerocopy_send_client": false, 00:26:10.738 "zerocopy_threshold": 0, 00:26:10.738 "tls_version": 0, 00:26:10.738 "enable_ktls": false 00:26:10.738 } 00:26:10.738 }, 00:26:10.738 { 00:26:10.738 "method": "sock_impl_set_options", 00:26:10.738 "params": { 00:26:10.738 "impl_name": "posix", 00:26:10.738 "recv_buf_size": 2097152, 00:26:10.738 "send_buf_size": 2097152, 00:26:10.738 "enable_recv_pipe": true, 00:26:10.738 "enable_quickack": false, 00:26:10.738 "enable_placement_id": 0, 00:26:10.738 "enable_zerocopy_send_server": true, 00:26:10.738 "enable_zerocopy_send_client": false, 00:26:10.738 "zerocopy_threshold": 0, 00:26:10.738 "tls_version": 0, 00:26:10.738 "enable_ktls": false 00:26:10.738 } 00:26:10.738 } 00:26:10.738 ] 00:26:10.738 }, 00:26:10.738 { 00:26:10.738 "subsystem": "vmd", 00:26:10.738 "config": [] 00:26:10.738 }, 00:26:10.738 { 00:26:10.738 "subsystem": "accel", 00:26:10.738 "config": [ 00:26:10.738 { 00:26:10.738 "method": "accel_set_options", 00:26:10.738 "params": { 00:26:10.738 "small_cache_size": 128, 00:26:10.738 "large_cache_size": 16, 00:26:10.738 "task_count": 2048, 00:26:10.738 "sequence_count": 2048, 00:26:10.738 "buf_count": 2048 00:26:10.738 } 00:26:10.738 } 00:26:10.738 ] 00:26:10.738 }, 00:26:10.738 { 00:26:10.738 "subsystem": "bdev", 00:26:10.738 "config": [ 00:26:10.738 { 00:26:10.738 "method": "bdev_set_options", 00:26:10.738 "params": { 00:26:10.738 "bdev_io_pool_size": 65535, 00:26:10.738 "bdev_io_cache_size": 256, 00:26:10.738 "bdev_auto_examine": true, 00:26:10.738 "iobuf_small_cache_size": 128, 00:26:10.738 "iobuf_large_cache_size": 16 00:26:10.738 } 00:26:10.738 }, 00:26:10.738 { 00:26:10.738 "method": "bdev_raid_set_options", 00:26:10.738 "params": { 00:26:10.738 "process_window_size_kb": 1024, 00:26:10.738 "process_max_bandwidth_mb_sec": 0 00:26:10.738 } 00:26:10.738 }, 00:26:10.738 { 00:26:10.738 "method": "bdev_iscsi_set_options", 00:26:10.738 "params": { 00:26:10.738 "timeout_sec": 30 00:26:10.738 } 00:26:10.738 }, 00:26:10.738 { 00:26:10.738 "method": "bdev_nvme_set_options", 00:26:10.738 "params": { 00:26:10.738 "action_on_timeout": "none", 00:26:10.738 
"timeout_us": 0, 00:26:10.738 "timeout_admin_us": 0, 00:26:10.738 "keep_alive_timeout_ms": 10000, 00:26:10.738 "arbitration_burst": 0, 00:26:10.738 "low_priority_weight": 0, 00:26:10.738 "medium_priority_weight": 0, 00:26:10.738 "high_priority_weight": 0, 00:26:10.738 "nvme_adminq_poll_period_us": 10000, 00:26:10.738 "nvme_ioq_poll_period_us": 0, 00:26:10.738 "io_queue_requests": 512, 00:26:10.738 "delay_cmd_submit": true, 00:26:10.738 "transport_retry_count": 4, 00:26:10.738 "bdev_retry_count": 3, 00:26:10.738 "transport_ack_timeout": 0, 00:26:10.738 "ctrlr_loss_timeout_sec": 0, 00:26:10.738 "reconnect_delay_sec": 0, 00:26:10.738 "fast_io_fail_timeout_sec": 0, 00:26:10.738 "disable_auto_failback": false, 00:26:10.738 "generate_uuids": false, 00:26:10.738 "transport_tos": 0, 00:26:10.738 "nvme_error_stat": false, 00:26:10.738 "rdma_srq_size": 0, 00:26:10.738 "io_path_stat": false, 00:26:10.738 "allow_accel_sequence": false, 00:26:10.738 "rdma_max_cq_size": 0, 00:26:10.738 "rdma_cm_event_timeout_ms": 0, 00:26:10.738 "dhchap_digests": [ 00:26:10.738 "sha256", 00:26:10.738 "sha384", 00:26:10.738 "sha512" 00:26:10.738 ], 00:26:10.738 "dhchap_dhgroups": [ 00:26:10.738 "null", 00:26:10.738 "ffdhe2048", 00:26:10.738 "ffdhe3072", 00:26:10.738 "ffdhe4096", 00:26:10.738 "ffdhe6144", 00:26:10.738 "ffdhe8192" 00:26:10.738 ] 00:26:10.738 } 00:26:10.738 }, 00:26:10.738 { 00:26:10.738 "method": "bdev_nvme_attach_controller", 00:26:10.738 "params": { 00:26:10.738 "name": "TLSTEST", 00:26:10.738 "trtype": "TCP", 00:26:10.738 "adrfam": "IPv4", 00:26:10.738 "traddr": "10.0.0.2", 00:26:10.738 "trsvcid": "4420", 00:26:10.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:10.738 "prchk_reftag": false, 00:26:10.738 "prchk_guard": false, 00:26:10.738 "ctrlr_loss_timeout_sec": 0, 00:26:10.738 "reconnect_delay_sec": 0, 00:26:10.738 "fast_io_fail_timeout_sec": 0, 00:26:10.738 "psk": "key0", 00:26:10.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:10.738 "hdgst": false, 00:26:10.738 "ddgst": false, 00:26:10.738 "multipath": "multipath" 00:26:10.738 } 00:26:10.738 }, 00:26:10.738 { 00:26:10.738 "method": "bdev_nvme_set_hotplug", 00:26:10.738 "params": { 00:26:10.738 "period_us": 100000, 00:26:10.738 "enable": false 00:26:10.738 } 00:26:10.738 }, 00:26:10.738 { 00:26:10.738 "method": "bdev_wait_for_examine" 00:26:10.738 } 00:26:10.738 ] 00:26:10.738 }, 00:26:10.738 { 00:26:10.738 "subsystem": "nbd", 00:26:10.738 "config": [] 00:26:10.738 } 00:26:10.738 ] 00:26:10.738 }' 00:26:10.738 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:10.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:10.738 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.738 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:10.738 [2024-12-10 00:07:55.028420] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:26:10.738 [2024-12-10 00:07:55.028470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445839 ] 00:26:10.738 [2024-12-10 00:07:55.116609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.738 [2024-12-10 00:07:55.156497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:10.997 [2024-12-10 00:07:55.310696] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:11.564 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.564 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:11.564 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:11.564 Running I/O for 10 seconds... 00:26:13.872 5347.00 IOPS, 20.89 MiB/s [2024-12-09T23:07:59.279Z] 5398.00 IOPS, 21.09 MiB/s [2024-12-09T23:08:00.214Z] 5296.33 IOPS, 20.69 MiB/s [2024-12-09T23:08:01.148Z] 5281.50 IOPS, 20.63 MiB/s [2024-12-09T23:08:02.083Z] 5292.20 IOPS, 20.67 MiB/s [2024-12-09T23:08:03.017Z] 5326.83 IOPS, 20.81 MiB/s [2024-12-09T23:08:04.392Z] 5098.14 IOPS, 19.91 MiB/s [2024-12-09T23:08:05.325Z] 5156.25 IOPS, 20.14 MiB/s [2024-12-09T23:08:06.258Z] 5193.56 IOPS, 20.29 MiB/s [2024-12-09T23:08:06.258Z] 5222.00 IOPS, 20.40 MiB/s 00:26:21.785 Latency(us) 00:26:21.785 [2024-12-09T23:08:06.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.785 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:21.785 Verification LBA range: start 0x0 length 0x2000 00:26:21.785 TLSTESTn1 : 10.01 5227.27 20.42 0.00 0.00 24453.47 5452.60 29150.41 00:26:21.785 [2024-12-09T23:08:06.258Z] =================================================================================================================== 00:26:21.785 [2024-12-09T23:08:06.258Z] Total : 5227.27 20.42 0.00 0.00 24453.47 5452.60 29150.41 00:26:21.785 { 00:26:21.785 "results": [ 00:26:21.785 { 00:26:21.785 "job": "TLSTESTn1", 00:26:21.785 "core_mask": "0x4", 00:26:21.785 "workload": "verify", 00:26:21.785 "status": "finished", 00:26:21.785 "verify_range": { 00:26:21.785 "start": 0, 00:26:21.785 "length": 8192 00:26:21.785 }, 00:26:21.785 "queue_depth": 128, 00:26:21.785 "io_size": 4096, 00:26:21.785 "runtime": 10.014024, 00:26:21.785 "iops": 5227.269277565143, 00:26:21.785 "mibps": 20.41902061548884, 00:26:21.785 "io_failed": 0, 00:26:21.785 "io_timeout": 0, 00:26:21.785 "avg_latency_us": 24453.47124221144, 00:26:21.785 "min_latency_us": 5452.5952, 00:26:21.785 "max_latency_us": 29150.4128 00:26:21.785 } 00:26:21.785 ], 00:26:21.785 "core_count": 1 00:26:21.785 } 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 445839 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 445839 ']' 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 445839 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- 
# uname 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445839 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445839' 00:26:21.785 killing process with pid 445839 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 445839 00:26:21.785 Received shutdown signal, test time was about 10.000000 seconds 00:26:21.785 00:26:21.785 Latency(us) 00:26:21.785 [2024-12-09T23:08:06.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.785 [2024-12-09T23:08:06.258Z] =================================================================================================================== 00:26:21.785 [2024-12-09T23:08:06.258Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 445839 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 445680 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 445680 ']' 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 445680 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:21.785 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445680 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445680' 00:26:22.043 killing process with pid 445680 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 445680 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 445680 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=447785 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 447785 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@835 -- # '[' -z 447785 ']' 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:22.043 00:08:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:22.302 [2024-12-10 00:08:06.546576] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:26:22.302 [2024-12-10 00:08:06.546622] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.302 [2024-12-10 00:08:06.639322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.302 [2024-12-10 00:08:06.678590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.302 [2024-12-10 00:08:06.678627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.302 [2024-12-10 00:08:06.678637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.302 [2024-12-10 00:08:06.678645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.302 [2024-12-10 00:08:06.678652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
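For reference, the target-side TLS setup that setup_nvmf_tgt performs in the rpc.py calls logged below can be reproduced by hand. This is a minimal sketch, not the harness itself: it assumes nvmf_tgt is already running, that its RPC socket is the default /var/tmp/spdk.sock, and that /tmp/tmp.lTb0vi6UrS already holds the PSK interchange file created earlier in this run.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                          # TCP transport, same options tls.sh passes
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS on the listener
  $rpc bdev_malloc_create 32 4096 -b malloc0                    # 32 MiB malloc bdev with 4096-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.lTb0vi6UrS            # register the PSK file as key0
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0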
00:26:22.302 [2024-12-10 00:08:06.679215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.236 00:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.236 00:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:23.236 00:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:23.236 00:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:23.236 00:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:23.236 00:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.236 00:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.lTb0vi6UrS 00:26:23.236 00:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lTb0vi6UrS 00:26:23.236 00:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:23.236 [2024-12-10 00:08:07.588954] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:23.236 00:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:23.494 00:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:23.752 [2024-12-10 00:08:08.002024] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:23.752 [2024-12-10 00:08:08.002232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.752 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:23.752 malloc0 00:26:24.010 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:24.010 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lTb0vi6UrS 00:26:24.269 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:26:24.527 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:24.527 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=448114 00:26:24.527 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:24.527 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 448114 /var/tmp/bdevperf.sock 00:26:24.527 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 448114 ']' 00:26:24.527 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:24.527 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.527 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:24.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:24.527 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.527 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:24.527 [2024-12-10 00:08:08.845994] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:26:24.527 [2024-12-10 00:08:08.846044] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448114 ] 00:26:24.527 [2024-12-10 00:08:08.938094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.527 [2024-12-10 00:08:08.976669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.785 00:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.785 00:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:24.785 00:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lTb0vi6UrS 00:26:25.043 00:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:25.043 [2024-12-10 00:08:09.429122] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:25.043 nvme0n1 00:26:25.301 00:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:25.301 Running I/O for 1 seconds... 
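The bdevperf side above follows the same pattern each time it appears in this log: bdevperf is started with -z so it waits on its own RPC socket, the PSK is registered there, the controller is attached with --psk, and bdevperf.py kicks off the verify workload whose per-second results follow below. A condensed sketch of that sequence, assuming the same socket path and key file used in this run:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  # (the harness waits for /var/tmp/bdevperf.sock to appear before issuing the RPCs below)
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lTb0vi6UrS
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests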
00:26:26.236 5217.00 IOPS, 20.38 MiB/s 00:26:26.236 Latency(us) 00:26:26.236 [2024-12-09T23:08:10.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.236 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:26.236 Verification LBA range: start 0x0 length 0x2000 00:26:26.236 nvme0n1 : 1.01 5270.17 20.59 0.00 0.00 24116.55 4954.52 21495.81 00:26:26.236 [2024-12-09T23:08:10.709Z] =================================================================================================================== 00:26:26.236 [2024-12-09T23:08:10.709Z] Total : 5270.17 20.59 0.00 0.00 24116.55 4954.52 21495.81 00:26:26.236 { 00:26:26.236 "results": [ 00:26:26.236 { 00:26:26.236 "job": "nvme0n1", 00:26:26.236 "core_mask": "0x2", 00:26:26.236 "workload": "verify", 00:26:26.236 "status": "finished", 00:26:26.236 "verify_range": { 00:26:26.236 "start": 0, 00:26:26.236 "length": 8192 00:26:26.236 }, 00:26:26.236 "queue_depth": 128, 00:26:26.236 "io_size": 4096, 00:26:26.236 "runtime": 1.014389, 00:26:26.236 "iops": 5270.16755899364, 00:26:26.236 "mibps": 20.586592027318908, 00:26:26.236 "io_failed": 0, 00:26:26.236 "io_timeout": 0, 00:26:26.236 "avg_latency_us": 24116.54924354658, 00:26:26.236 "min_latency_us": 4954.5216, 00:26:26.236 "max_latency_us": 21495.808 00:26:26.236 } 00:26:26.236 ], 00:26:26.236 "core_count": 1 00:26:26.236 } 00:26:26.236 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 448114 00:26:26.236 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 448114 ']' 00:26:26.236 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 448114 00:26:26.236 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:26.236 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.236 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 448114 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 448114' 00:26:26.496 killing process with pid 448114 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 448114 00:26:26.496 Received shutdown signal, test time was about 1.000000 seconds 00:26:26.496 00:26:26.496 Latency(us) 00:26:26.496 [2024-12-09T23:08:10.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.496 [2024-12-09T23:08:10.969Z] =================================================================================================================== 00:26:26.496 [2024-12-09T23:08:10.969Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 448114 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 447785 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 447785 ']' 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 447785 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 447785 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 447785' 00:26:26.496 killing process with pid 447785 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 447785 00:26:26.496 00:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 447785 00:26:26.755 00:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:26:26.755 00:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:26.755 00:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:26.755 00:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:26.755 00:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=448569 00:26:26.755 00:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:26.755 00:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 448569 00:26:26.755 00:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 448569 ']' 00:26:26.755 00:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.755 00:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:26.755 00:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.755 00:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:26.755 00:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:26.755 [2024-12-10 00:08:11.192374] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:26:26.755 [2024-12-10 00:08:11.192429] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.014 [2024-12-10 00:08:11.285657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.014 [2024-12-10 00:08:11.321689] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.014 [2024-12-10 00:08:11.321725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
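Further down in this section (tls.sh@267 and tls.sh@268) the test captures the running configuration of both the target and bdevperf with save_config, producing the two JSON trees printed below. A minimal sketch of that capture step, assuming the same RPC sockets used in this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  tgtcfg=$($rpc save_config)                              # target config via the default /var/tmp/spdk.sock
  bperfcfg=$($rpc -s /var/tmp/bdevperf.sock save_config)  # bdevperf config via its private socket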
00:26:27.014 [2024-12-10 00:08:11.321735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.014 [2024-12-10 00:08:11.321743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.014 [2024-12-10 00:08:11.321765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.014 [2024-12-10 00:08:11.322350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.582 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.582 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:27.582 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:27.582 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:27.582 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:27.841 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.841 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:26:27.841 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.841 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:27.841 [2024-12-10 00:08:12.070992] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.841 malloc0 00:26:27.841 [2024-12-10 00:08:12.099270] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:27.841 [2024-12-10 00:08:12.099484] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.841 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.841 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=448672 00:26:27.841 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:27.841 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 448672 /var/tmp/bdevperf.sock 00:26:27.841 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 448672 ']' 00:26:27.841 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:27.841 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:27.841 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:27.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:27.841 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:27.841 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:27.841 [2024-12-10 00:08:12.175596] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:26:27.841 [2024-12-10 00:08:12.175639] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448672 ] 00:26:27.841 [2024-12-10 00:08:12.264513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.841 [2024-12-10 00:08:12.303009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.777 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.777 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:28.777 00:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lTb0vi6UrS 00:26:28.777 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:29.035 [2024-12-10 00:08:13.321549] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:29.035 nvme0n1 00:26:29.035 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:29.035 Running I/O for 1 seconds... 00:26:30.412 5291.00 IOPS, 20.67 MiB/s 00:26:30.412 Latency(us) 00:26:30.412 [2024-12-09T23:08:14.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.412 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:30.412 Verification LBA range: start 0x0 length 0x2000 00:26:30.412 nvme0n1 : 1.02 5330.12 20.82 0.00 0.00 23809.69 7077.89 28101.84 00:26:30.412 [2024-12-09T23:08:14.885Z] =================================================================================================================== 00:26:30.412 [2024-12-09T23:08:14.885Z] Total : 5330.12 20.82 0.00 0.00 23809.69 7077.89 28101.84 00:26:30.412 { 00:26:30.412 "results": [ 00:26:30.412 { 00:26:30.412 "job": "nvme0n1", 00:26:30.412 "core_mask": "0x2", 00:26:30.412 "workload": "verify", 00:26:30.412 "status": "finished", 00:26:30.412 "verify_range": { 00:26:30.412 "start": 0, 00:26:30.412 "length": 8192 00:26:30.412 }, 00:26:30.412 "queue_depth": 128, 00:26:30.412 "io_size": 4096, 00:26:30.412 "runtime": 1.016675, 00:26:30.412 "iops": 5330.1202449160255, 00:26:30.412 "mibps": 20.820782206703225, 00:26:30.412 "io_failed": 0, 00:26:30.412 "io_timeout": 0, 00:26:30.412 "avg_latency_us": 23809.69440915298, 00:26:30.412 "min_latency_us": 7077.888, 00:26:30.412 "max_latency_us": 28101.8368 00:26:30.412 } 00:26:30.412 ], 00:26:30.412 "core_count": 1 00:26:30.412 } 00:26:30.412 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:26:30.412 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.412 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:30.412 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.412 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@267 -- # tgtcfg='{ 00:26:30.412 "subsystems": [ 00:26:30.412 { 00:26:30.412 "subsystem": "keyring", 00:26:30.412 "config": [ 00:26:30.412 { 00:26:30.412 "method": "keyring_file_add_key", 00:26:30.412 "params": { 00:26:30.412 "name": "key0", 00:26:30.412 "path": "/tmp/tmp.lTb0vi6UrS" 00:26:30.412 } 00:26:30.412 } 00:26:30.412 ] 00:26:30.412 }, 00:26:30.412 { 00:26:30.412 "subsystem": "iobuf", 00:26:30.412 "config": [ 00:26:30.412 { 00:26:30.412 "method": "iobuf_set_options", 00:26:30.412 "params": { 00:26:30.412 "small_pool_count": 8192, 00:26:30.412 "large_pool_count": 1024, 00:26:30.412 "small_bufsize": 8192, 00:26:30.412 "large_bufsize": 135168, 00:26:30.412 "enable_numa": false 00:26:30.412 } 00:26:30.412 } 00:26:30.412 ] 00:26:30.412 }, 00:26:30.412 { 00:26:30.412 "subsystem": "sock", 00:26:30.412 "config": [ 00:26:30.412 { 00:26:30.412 "method": "sock_set_default_impl", 00:26:30.412 "params": { 00:26:30.412 "impl_name": "posix" 00:26:30.412 } 00:26:30.412 }, 00:26:30.412 { 00:26:30.412 "method": "sock_impl_set_options", 00:26:30.412 "params": { 00:26:30.412 "impl_name": "ssl", 00:26:30.412 "recv_buf_size": 4096, 00:26:30.413 "send_buf_size": 4096, 00:26:30.413 "enable_recv_pipe": true, 00:26:30.413 "enable_quickack": false, 00:26:30.413 "enable_placement_id": 0, 00:26:30.413 "enable_zerocopy_send_server": true, 00:26:30.413 "enable_zerocopy_send_client": false, 00:26:30.413 "zerocopy_threshold": 0, 00:26:30.413 "tls_version": 0, 00:26:30.413 "enable_ktls": false 00:26:30.413 } 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "method": "sock_impl_set_options", 00:26:30.413 "params": { 00:26:30.413 "impl_name": "posix", 00:26:30.413 "recv_buf_size": 2097152, 00:26:30.413 "send_buf_size": 2097152, 00:26:30.413 "enable_recv_pipe": true, 00:26:30.413 "enable_quickack": false, 00:26:30.413 "enable_placement_id": 0, 00:26:30.413 "enable_zerocopy_send_server": true, 00:26:30.413 "enable_zerocopy_send_client": false, 00:26:30.413 "zerocopy_threshold": 0, 00:26:30.413 "tls_version": 0, 00:26:30.413 "enable_ktls": false 00:26:30.413 } 00:26:30.413 } 00:26:30.413 ] 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "subsystem": "vmd", 00:26:30.413 "config": [] 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "subsystem": "accel", 00:26:30.413 "config": [ 00:26:30.413 { 00:26:30.413 "method": "accel_set_options", 00:26:30.413 "params": { 00:26:30.413 "small_cache_size": 128, 00:26:30.413 "large_cache_size": 16, 00:26:30.413 "task_count": 2048, 00:26:30.413 "sequence_count": 2048, 00:26:30.413 "buf_count": 2048 00:26:30.413 } 00:26:30.413 } 00:26:30.413 ] 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "subsystem": "bdev", 00:26:30.413 "config": [ 00:26:30.413 { 00:26:30.413 "method": "bdev_set_options", 00:26:30.413 "params": { 00:26:30.413 "bdev_io_pool_size": 65535, 00:26:30.413 "bdev_io_cache_size": 256, 00:26:30.413 "bdev_auto_examine": true, 00:26:30.413 "iobuf_small_cache_size": 128, 00:26:30.413 "iobuf_large_cache_size": 16 00:26:30.413 } 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "method": "bdev_raid_set_options", 00:26:30.413 "params": { 00:26:30.413 "process_window_size_kb": 1024, 00:26:30.413 "process_max_bandwidth_mb_sec": 0 00:26:30.413 } 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "method": "bdev_iscsi_set_options", 00:26:30.413 "params": { 00:26:30.413 "timeout_sec": 30 00:26:30.413 } 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "method": "bdev_nvme_set_options", 00:26:30.413 "params": { 00:26:30.413 "action_on_timeout": "none", 00:26:30.413 "timeout_us": 0, 00:26:30.413 
"timeout_admin_us": 0, 00:26:30.413 "keep_alive_timeout_ms": 10000, 00:26:30.413 "arbitration_burst": 0, 00:26:30.413 "low_priority_weight": 0, 00:26:30.413 "medium_priority_weight": 0, 00:26:30.413 "high_priority_weight": 0, 00:26:30.413 "nvme_adminq_poll_period_us": 10000, 00:26:30.413 "nvme_ioq_poll_period_us": 0, 00:26:30.413 "io_queue_requests": 0, 00:26:30.413 "delay_cmd_submit": true, 00:26:30.413 "transport_retry_count": 4, 00:26:30.413 "bdev_retry_count": 3, 00:26:30.413 "transport_ack_timeout": 0, 00:26:30.413 "ctrlr_loss_timeout_sec": 0, 00:26:30.413 "reconnect_delay_sec": 0, 00:26:30.413 "fast_io_fail_timeout_sec": 0, 00:26:30.413 "disable_auto_failback": false, 00:26:30.413 "generate_uuids": false, 00:26:30.413 "transport_tos": 0, 00:26:30.413 "nvme_error_stat": false, 00:26:30.413 "rdma_srq_size": 0, 00:26:30.413 "io_path_stat": false, 00:26:30.413 "allow_accel_sequence": false, 00:26:30.413 "rdma_max_cq_size": 0, 00:26:30.413 "rdma_cm_event_timeout_ms": 0, 00:26:30.413 "dhchap_digests": [ 00:26:30.413 "sha256", 00:26:30.413 "sha384", 00:26:30.413 "sha512" 00:26:30.413 ], 00:26:30.413 "dhchap_dhgroups": [ 00:26:30.413 "null", 00:26:30.413 "ffdhe2048", 00:26:30.413 "ffdhe3072", 00:26:30.413 "ffdhe4096", 00:26:30.413 "ffdhe6144", 00:26:30.413 "ffdhe8192" 00:26:30.413 ] 00:26:30.413 } 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "method": "bdev_nvme_set_hotplug", 00:26:30.413 "params": { 00:26:30.413 "period_us": 100000, 00:26:30.413 "enable": false 00:26:30.413 } 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "method": "bdev_malloc_create", 00:26:30.413 "params": { 00:26:30.413 "name": "malloc0", 00:26:30.413 "num_blocks": 8192, 00:26:30.413 "block_size": 4096, 00:26:30.413 "physical_block_size": 4096, 00:26:30.413 "uuid": "cda88d1a-1b5d-4a63-a8c8-94d33c665eae", 00:26:30.413 "optimal_io_boundary": 0, 00:26:30.413 "md_size": 0, 00:26:30.413 "dif_type": 0, 00:26:30.413 "dif_is_head_of_md": false, 00:26:30.413 "dif_pi_format": 0 00:26:30.413 } 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "method": "bdev_wait_for_examine" 00:26:30.413 } 00:26:30.413 ] 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "subsystem": "nbd", 00:26:30.413 "config": [] 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "subsystem": "scheduler", 00:26:30.413 "config": [ 00:26:30.413 { 00:26:30.413 "method": "framework_set_scheduler", 00:26:30.413 "params": { 00:26:30.413 "name": "static" 00:26:30.413 } 00:26:30.413 } 00:26:30.413 ] 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "subsystem": "nvmf", 00:26:30.413 "config": [ 00:26:30.413 { 00:26:30.413 "method": "nvmf_set_config", 00:26:30.413 "params": { 00:26:30.413 "discovery_filter": "match_any", 00:26:30.413 "admin_cmd_passthru": { 00:26:30.413 "identify_ctrlr": false 00:26:30.413 }, 00:26:30.413 "dhchap_digests": [ 00:26:30.413 "sha256", 00:26:30.413 "sha384", 00:26:30.413 "sha512" 00:26:30.413 ], 00:26:30.413 "dhchap_dhgroups": [ 00:26:30.413 "null", 00:26:30.413 "ffdhe2048", 00:26:30.413 "ffdhe3072", 00:26:30.413 "ffdhe4096", 00:26:30.413 "ffdhe6144", 00:26:30.413 "ffdhe8192" 00:26:30.413 ] 00:26:30.413 } 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "method": "nvmf_set_max_subsystems", 00:26:30.413 "params": { 00:26:30.413 "max_subsystems": 1024 00:26:30.413 } 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "method": "nvmf_set_crdt", 00:26:30.413 "params": { 00:26:30.413 "crdt1": 0, 00:26:30.413 "crdt2": 0, 00:26:30.413 "crdt3": 0 00:26:30.413 } 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "method": "nvmf_create_transport", 00:26:30.413 "params": { 00:26:30.413 "trtype": 
"TCP", 00:26:30.413 "max_queue_depth": 128, 00:26:30.413 "max_io_qpairs_per_ctrlr": 127, 00:26:30.413 "in_capsule_data_size": 4096, 00:26:30.413 "max_io_size": 131072, 00:26:30.413 "io_unit_size": 131072, 00:26:30.413 "max_aq_depth": 128, 00:26:30.413 "num_shared_buffers": 511, 00:26:30.413 "buf_cache_size": 4294967295, 00:26:30.413 "dif_insert_or_strip": false, 00:26:30.413 "zcopy": false, 00:26:30.413 "c2h_success": false, 00:26:30.413 "sock_priority": 0, 00:26:30.413 "abort_timeout_sec": 1, 00:26:30.413 "ack_timeout": 0, 00:26:30.413 "data_wr_pool_size": 0 00:26:30.413 } 00:26:30.413 }, 00:26:30.413 { 00:26:30.413 "method": "nvmf_create_subsystem", 00:26:30.413 "params": { 00:26:30.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.413 "allow_any_host": false, 00:26:30.413 "serial_number": "00000000000000000000", 00:26:30.414 "model_number": "SPDK bdev Controller", 00:26:30.414 "max_namespaces": 32, 00:26:30.414 "min_cntlid": 1, 00:26:30.414 "max_cntlid": 65519, 00:26:30.414 "ana_reporting": false 00:26:30.414 } 00:26:30.414 }, 00:26:30.414 { 00:26:30.414 "method": "nvmf_subsystem_add_host", 00:26:30.414 "params": { 00:26:30.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.414 "host": "nqn.2016-06.io.spdk:host1", 00:26:30.414 "psk": "key0" 00:26:30.414 } 00:26:30.414 }, 00:26:30.414 { 00:26:30.414 "method": "nvmf_subsystem_add_ns", 00:26:30.414 "params": { 00:26:30.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.414 "namespace": { 00:26:30.414 "nsid": 1, 00:26:30.414 "bdev_name": "malloc0", 00:26:30.414 "nguid": "CDA88D1A1B5D4A63A8C894D33C665EAE", 00:26:30.414 "uuid": "cda88d1a-1b5d-4a63-a8c8-94d33c665eae", 00:26:30.414 "no_auto_visible": false 00:26:30.414 } 00:26:30.414 } 00:26:30.414 }, 00:26:30.414 { 00:26:30.414 "method": "nvmf_subsystem_add_listener", 00:26:30.414 "params": { 00:26:30.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.414 "listen_address": { 00:26:30.414 "trtype": "TCP", 00:26:30.414 "adrfam": "IPv4", 00:26:30.414 "traddr": "10.0.0.2", 00:26:30.414 "trsvcid": "4420" 00:26:30.414 }, 00:26:30.414 "secure_channel": false, 00:26:30.414 "sock_impl": "ssl" 00:26:30.414 } 00:26:30.414 } 00:26:30.414 ] 00:26:30.414 } 00:26:30.414 ] 00:26:30.414 }' 00:26:30.414 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:30.674 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:26:30.674 "subsystems": [ 00:26:30.674 { 00:26:30.674 "subsystem": "keyring", 00:26:30.674 "config": [ 00:26:30.674 { 00:26:30.674 "method": "keyring_file_add_key", 00:26:30.674 "params": { 00:26:30.674 "name": "key0", 00:26:30.674 "path": "/tmp/tmp.lTb0vi6UrS" 00:26:30.674 } 00:26:30.674 } 00:26:30.674 ] 00:26:30.674 }, 00:26:30.674 { 00:26:30.674 "subsystem": "iobuf", 00:26:30.674 "config": [ 00:26:30.674 { 00:26:30.674 "method": "iobuf_set_options", 00:26:30.674 "params": { 00:26:30.674 "small_pool_count": 8192, 00:26:30.674 "large_pool_count": 1024, 00:26:30.674 "small_bufsize": 8192, 00:26:30.674 "large_bufsize": 135168, 00:26:30.674 "enable_numa": false 00:26:30.674 } 00:26:30.674 } 00:26:30.674 ] 00:26:30.674 }, 00:26:30.674 { 00:26:30.674 "subsystem": "sock", 00:26:30.674 "config": [ 00:26:30.674 { 00:26:30.674 "method": "sock_set_default_impl", 00:26:30.674 "params": { 00:26:30.674 "impl_name": "posix" 00:26:30.674 } 00:26:30.674 }, 00:26:30.674 { 00:26:30.674 "method": "sock_impl_set_options", 00:26:30.674 "params": { 00:26:30.674 
"impl_name": "ssl", 00:26:30.674 "recv_buf_size": 4096, 00:26:30.674 "send_buf_size": 4096, 00:26:30.674 "enable_recv_pipe": true, 00:26:30.674 "enable_quickack": false, 00:26:30.674 "enable_placement_id": 0, 00:26:30.674 "enable_zerocopy_send_server": true, 00:26:30.674 "enable_zerocopy_send_client": false, 00:26:30.674 "zerocopy_threshold": 0, 00:26:30.674 "tls_version": 0, 00:26:30.674 "enable_ktls": false 00:26:30.674 } 00:26:30.674 }, 00:26:30.674 { 00:26:30.674 "method": "sock_impl_set_options", 00:26:30.674 "params": { 00:26:30.674 "impl_name": "posix", 00:26:30.674 "recv_buf_size": 2097152, 00:26:30.674 "send_buf_size": 2097152, 00:26:30.674 "enable_recv_pipe": true, 00:26:30.674 "enable_quickack": false, 00:26:30.674 "enable_placement_id": 0, 00:26:30.674 "enable_zerocopy_send_server": true, 00:26:30.674 "enable_zerocopy_send_client": false, 00:26:30.674 "zerocopy_threshold": 0, 00:26:30.674 "tls_version": 0, 00:26:30.674 "enable_ktls": false 00:26:30.674 } 00:26:30.674 } 00:26:30.674 ] 00:26:30.674 }, 00:26:30.674 { 00:26:30.674 "subsystem": "vmd", 00:26:30.674 "config": [] 00:26:30.674 }, 00:26:30.674 { 00:26:30.674 "subsystem": "accel", 00:26:30.674 "config": [ 00:26:30.674 { 00:26:30.674 "method": "accel_set_options", 00:26:30.674 "params": { 00:26:30.674 "small_cache_size": 128, 00:26:30.674 "large_cache_size": 16, 00:26:30.674 "task_count": 2048, 00:26:30.674 "sequence_count": 2048, 00:26:30.674 "buf_count": 2048 00:26:30.674 } 00:26:30.674 } 00:26:30.674 ] 00:26:30.674 }, 00:26:30.674 { 00:26:30.674 "subsystem": "bdev", 00:26:30.674 "config": [ 00:26:30.674 { 00:26:30.674 "method": "bdev_set_options", 00:26:30.674 "params": { 00:26:30.674 "bdev_io_pool_size": 65535, 00:26:30.674 "bdev_io_cache_size": 256, 00:26:30.674 "bdev_auto_examine": true, 00:26:30.674 "iobuf_small_cache_size": 128, 00:26:30.674 "iobuf_large_cache_size": 16 00:26:30.674 } 00:26:30.674 }, 00:26:30.674 { 00:26:30.674 "method": "bdev_raid_set_options", 00:26:30.674 "params": { 00:26:30.674 "process_window_size_kb": 1024, 00:26:30.674 "process_max_bandwidth_mb_sec": 0 00:26:30.674 } 00:26:30.674 }, 00:26:30.674 { 00:26:30.674 "method": "bdev_iscsi_set_options", 00:26:30.674 "params": { 00:26:30.674 "timeout_sec": 30 00:26:30.674 } 00:26:30.674 }, 00:26:30.674 { 00:26:30.674 "method": "bdev_nvme_set_options", 00:26:30.674 "params": { 00:26:30.674 "action_on_timeout": "none", 00:26:30.674 "timeout_us": 0, 00:26:30.674 "timeout_admin_us": 0, 00:26:30.674 "keep_alive_timeout_ms": 10000, 00:26:30.674 "arbitration_burst": 0, 00:26:30.674 "low_priority_weight": 0, 00:26:30.674 "medium_priority_weight": 0, 00:26:30.674 "high_priority_weight": 0, 00:26:30.674 "nvme_adminq_poll_period_us": 10000, 00:26:30.674 "nvme_ioq_poll_period_us": 0, 00:26:30.674 "io_queue_requests": 512, 00:26:30.674 "delay_cmd_submit": true, 00:26:30.674 "transport_retry_count": 4, 00:26:30.674 "bdev_retry_count": 3, 00:26:30.674 "transport_ack_timeout": 0, 00:26:30.674 "ctrlr_loss_timeout_sec": 0, 00:26:30.674 "reconnect_delay_sec": 0, 00:26:30.674 "fast_io_fail_timeout_sec": 0, 00:26:30.674 "disable_auto_failback": false, 00:26:30.674 "generate_uuids": false, 00:26:30.674 "transport_tos": 0, 00:26:30.674 "nvme_error_stat": false, 00:26:30.674 "rdma_srq_size": 0, 00:26:30.674 "io_path_stat": false, 00:26:30.674 "allow_accel_sequence": false, 00:26:30.674 "rdma_max_cq_size": 0, 00:26:30.674 "rdma_cm_event_timeout_ms": 0, 00:26:30.675 "dhchap_digests": [ 00:26:30.675 "sha256", 00:26:30.675 "sha384", 00:26:30.675 "sha512" 00:26:30.675 ], 
00:26:30.675 "dhchap_dhgroups": [ 00:26:30.675 "null", 00:26:30.675 "ffdhe2048", 00:26:30.675 "ffdhe3072", 00:26:30.675 "ffdhe4096", 00:26:30.675 "ffdhe6144", 00:26:30.675 "ffdhe8192" 00:26:30.675 ] 00:26:30.675 } 00:26:30.675 }, 00:26:30.675 { 00:26:30.675 "method": "bdev_nvme_attach_controller", 00:26:30.675 "params": { 00:26:30.675 "name": "nvme0", 00:26:30.675 "trtype": "TCP", 00:26:30.675 "adrfam": "IPv4", 00:26:30.675 "traddr": "10.0.0.2", 00:26:30.675 "trsvcid": "4420", 00:26:30.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.675 "prchk_reftag": false, 00:26:30.675 "prchk_guard": false, 00:26:30.675 "ctrlr_loss_timeout_sec": 0, 00:26:30.675 "reconnect_delay_sec": 0, 00:26:30.675 "fast_io_fail_timeout_sec": 0, 00:26:30.675 "psk": "key0", 00:26:30.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:30.675 "hdgst": false, 00:26:30.675 "ddgst": false, 00:26:30.675 "multipath": "multipath" 00:26:30.675 } 00:26:30.675 }, 00:26:30.675 { 00:26:30.675 "method": "bdev_nvme_set_hotplug", 00:26:30.675 "params": { 00:26:30.675 "period_us": 100000, 00:26:30.675 "enable": false 00:26:30.675 } 00:26:30.675 }, 00:26:30.675 { 00:26:30.675 "method": "bdev_enable_histogram", 00:26:30.675 "params": { 00:26:30.675 "name": "nvme0n1", 00:26:30.675 "enable": true 00:26:30.675 } 00:26:30.675 }, 00:26:30.675 { 00:26:30.675 "method": "bdev_wait_for_examine" 00:26:30.675 } 00:26:30.675 ] 00:26:30.675 }, 00:26:30.675 { 00:26:30.675 "subsystem": "nbd", 00:26:30.675 "config": [] 00:26:30.675 } 00:26:30.675 ] 00:26:30.675 }' 00:26:30.675 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 448672 00:26:30.675 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 448672 ']' 00:26:30.675 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 448672 00:26:30.675 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:30.675 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.675 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 448672 00:26:30.675 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:30.675 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:30.675 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 448672' 00:26:30.675 killing process with pid 448672 00:26:30.675 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 448672 00:26:30.675 Received shutdown signal, test time was about 1.000000 seconds 00:26:30.675 00:26:30.675 Latency(us) 00:26:30.675 [2024-12-09T23:08:15.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.675 [2024-12-09T23:08:15.148Z] =================================================================================================================== 00:26:30.675 [2024-12-09T23:08:15.148Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:30.675 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 448672 00:26:30.934 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 448569 00:26:30.934 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 448569 ']' 00:26:30.934 00:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 448569 00:26:30.934 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:30.934 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.934 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 448569 00:26:30.934 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:30.934 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:30.934 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 448569' 00:26:30.934 killing process with pid 448569 00:26:30.934 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 448569 00:26:30.934 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 448569 00:26:30.934 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:26:30.934 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:30.934 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:30.934 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:26:30.934 "subsystems": [ 00:26:30.934 { 00:26:30.934 "subsystem": "keyring", 00:26:30.934 "config": [ 00:26:30.934 { 00:26:30.934 "method": "keyring_file_add_key", 00:26:30.934 "params": { 00:26:30.934 "name": "key0", 00:26:30.934 "path": "/tmp/tmp.lTb0vi6UrS" 00:26:30.934 } 00:26:30.934 } 00:26:30.934 ] 00:26:30.934 }, 00:26:30.934 { 00:26:30.934 "subsystem": "iobuf", 00:26:30.934 "config": [ 00:26:30.934 { 00:26:30.934 "method": "iobuf_set_options", 00:26:30.934 "params": { 00:26:30.934 "small_pool_count": 8192, 00:26:30.934 "large_pool_count": 1024, 00:26:30.934 "small_bufsize": 8192, 00:26:30.934 "large_bufsize": 135168, 00:26:30.934 "enable_numa": false 00:26:30.934 } 00:26:30.934 } 00:26:30.934 ] 00:26:30.934 }, 00:26:30.934 { 00:26:30.934 "subsystem": "sock", 00:26:30.934 "config": [ 00:26:30.934 { 00:26:30.934 "method": "sock_set_default_impl", 00:26:30.934 "params": { 00:26:30.934 "impl_name": "posix" 00:26:30.934 } 00:26:30.934 }, 00:26:30.934 { 00:26:30.934 "method": "sock_impl_set_options", 00:26:30.934 "params": { 00:26:30.934 "impl_name": "ssl", 00:26:30.934 "recv_buf_size": 4096, 00:26:30.934 "send_buf_size": 4096, 00:26:30.934 "enable_recv_pipe": true, 00:26:30.934 "enable_quickack": false, 00:26:30.934 "enable_placement_id": 0, 00:26:30.934 "enable_zerocopy_send_server": true, 00:26:30.934 "enable_zerocopy_send_client": false, 00:26:30.934 "zerocopy_threshold": 0, 00:26:30.934 "tls_version": 0, 00:26:30.934 "enable_ktls": false 00:26:30.934 } 00:26:30.934 }, 00:26:30.934 { 00:26:30.934 "method": "sock_impl_set_options", 00:26:30.934 "params": { 00:26:30.934 "impl_name": "posix", 00:26:30.934 "recv_buf_size": 2097152, 00:26:30.934 "send_buf_size": 2097152, 00:26:30.934 "enable_recv_pipe": true, 00:26:30.934 "enable_quickack": false, 00:26:30.934 "enable_placement_id": 0, 00:26:30.934 "enable_zerocopy_send_server": true, 00:26:30.934 "enable_zerocopy_send_client": false, 00:26:30.934 "zerocopy_threshold": 0, 00:26:30.934 "tls_version": 0, 00:26:30.934 "enable_ktls": false 00:26:30.934 } 00:26:30.934 
} 00:26:30.934 ] 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "subsystem": "vmd", 00:26:30.935 "config": [] 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "subsystem": "accel", 00:26:30.935 "config": [ 00:26:30.935 { 00:26:30.935 "method": "accel_set_options", 00:26:30.935 "params": { 00:26:30.935 "small_cache_size": 128, 00:26:30.935 "large_cache_size": 16, 00:26:30.935 "task_count": 2048, 00:26:30.935 "sequence_count": 2048, 00:26:30.935 "buf_count": 2048 00:26:30.935 } 00:26:30.935 } 00:26:30.935 ] 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "subsystem": "bdev", 00:26:30.935 "config": [ 00:26:30.935 { 00:26:30.935 "method": "bdev_set_options", 00:26:30.935 "params": { 00:26:30.935 "bdev_io_pool_size": 65535, 00:26:30.935 "bdev_io_cache_size": 256, 00:26:30.935 "bdev_auto_examine": true, 00:26:30.935 "iobuf_small_cache_size": 128, 00:26:30.935 "iobuf_large_cache_size": 16 00:26:30.935 } 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "method": "bdev_raid_set_options", 00:26:30.935 "params": { 00:26:30.935 "process_window_size_kb": 1024, 00:26:30.935 "process_max_bandwidth_mb_sec": 0 00:26:30.935 } 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "method": "bdev_iscsi_set_options", 00:26:30.935 "params": { 00:26:30.935 "timeout_sec": 30 00:26:30.935 } 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "method": "bdev_nvme_set_options", 00:26:30.935 "params": { 00:26:30.935 "action_on_timeout": "none", 00:26:30.935 "timeout_us": 0, 00:26:30.935 "timeout_admin_us": 0, 00:26:30.935 "keep_alive_timeout_ms": 10000, 00:26:30.935 "arbitration_burst": 0, 00:26:30.935 "low_priority_weight": 0, 00:26:30.935 "medium_priority_weight": 0, 00:26:30.935 "high_priority_weight": 0, 00:26:30.935 "nvme_adminq_poll_period_us": 10000, 00:26:30.935 "nvme_ioq_poll_period_us": 0, 00:26:30.935 "io_queue_requests": 0, 00:26:30.935 "delay_cmd_submit": true, 00:26:30.935 "transport_retry_count": 4, 00:26:30.935 "bdev_retry_count": 3, 00:26:30.935 "transport_ack_timeout": 0, 00:26:30.935 "ctrlr_loss_timeout_sec": 0, 00:26:30.935 "reconnect_delay_sec": 0, 00:26:30.935 "fast_io_fail_timeout_sec": 0, 00:26:30.935 "disable_auto_failback": false, 00:26:30.935 "generate_uuids": false, 00:26:30.935 "transport_tos": 0, 00:26:30.935 "nvme_error_stat": false, 00:26:30.935 "rdma_srq_size": 0, 00:26:30.935 "io_path_stat": false, 00:26:30.935 "allow_accel_sequence": false, 00:26:30.935 "rdma_max_cq_size": 0, 00:26:30.935 "rdma_cm_event_timeout_ms": 0, 00:26:30.935 "dhchap_digests": [ 00:26:30.935 "sha256", 00:26:30.935 "sha384", 00:26:30.935 "sha512" 00:26:30.935 ], 00:26:30.935 "dhchap_dhgroups": [ 00:26:30.935 "null", 00:26:30.935 "ffdhe2048", 00:26:30.935 "ffdhe3072", 00:26:30.935 "ffdhe4096", 00:26:30.935 "ffdhe6144", 00:26:30.935 "ffdhe8192" 00:26:30.935 ] 00:26:30.935 } 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "method": "bdev_nvme_set_hotplug", 00:26:30.935 "params": { 00:26:30.935 "period_us": 100000, 00:26:30.935 "enable": false 00:26:30.935 } 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "method": "bdev_malloc_create", 00:26:30.935 "params": { 00:26:30.935 "name": "malloc0", 00:26:30.935 "num_blocks": 8192, 00:26:30.935 "block_size": 4096, 00:26:30.935 "physical_block_size": 4096, 00:26:30.935 "uuid": "cda88d1a-1b5d-4a63-a8c8-94d33c665eae", 00:26:30.935 "optimal_io_boundary": 0, 00:26:30.935 "md_size": 0, 00:26:30.935 "dif_type": 0, 00:26:30.935 "dif_is_head_of_md": false, 00:26:30.935 "dif_pi_format": 0 00:26:30.935 } 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "method": "bdev_wait_for_examine" 00:26:30.935 } 00:26:30.935 ] 00:26:30.935 
}, 00:26:30.935 { 00:26:30.935 "subsystem": "nbd", 00:26:30.935 "config": [] 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "subsystem": "scheduler", 00:26:30.935 "config": [ 00:26:30.935 { 00:26:30.935 "method": "framework_set_scheduler", 00:26:30.935 "params": { 00:26:30.935 "name": "static" 00:26:30.935 } 00:26:30.935 } 00:26:30.935 ] 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "subsystem": "nvmf", 00:26:30.935 "config": [ 00:26:30.935 { 00:26:30.935 "method": "nvmf_set_config", 00:26:30.935 "params": { 00:26:30.935 "discovery_filter": "match_any", 00:26:30.935 "admin_cmd_passthru": { 00:26:30.935 "identify_ctrlr": false 00:26:30.935 }, 00:26:30.935 "dhchap_digests": [ 00:26:30.935 "sha256", 00:26:30.935 "sha384", 00:26:30.935 "sha512" 00:26:30.935 ], 00:26:30.935 "dhchap_dhgroups": [ 00:26:30.935 "null", 00:26:30.935 "ffdhe2048", 00:26:30.935 "ffdhe3072", 00:26:30.935 "ffdhe4096", 00:26:30.935 "ffdhe6144", 00:26:30.935 "ffdhe8192" 00:26:30.935 ] 00:26:30.935 } 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "method": "nvmf_set_max_subsystems", 00:26:30.935 "params": { 00:26:30.935 "max_subsystems": 1024 00:26:30.935 } 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "method": "nvmf_set_crdt", 00:26:30.935 "params": { 00:26:30.935 "crdt1": 0, 00:26:30.935 "crdt2": 0, 00:26:30.935 "crdt3": 0 00:26:30.935 } 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "method": "nvmf_create_transport", 00:26:30.935 "params": { 00:26:30.935 "trtype": "TCP", 00:26:30.935 "max_queue_depth": 128, 00:26:30.935 "max_io_qpairs_per_ctrlr": 127, 00:26:30.935 "in_capsule_data_size": 4096, 00:26:30.935 "max_io_size": 131072, 00:26:30.935 "io_unit_size": 131072, 00:26:30.935 "max_aq_depth": 128, 00:26:30.935 "num_shared_buffers": 511, 00:26:30.935 "buf_cache_size": 4294967295, 00:26:30.935 "dif_insert_or_strip": false, 00:26:30.935 "zcopy": false, 00:26:30.935 "c2h_success": false, 00:26:30.935 "sock_priority": 0, 00:26:30.935 "abort_timeout_sec": 1, 00:26:30.935 "ack_timeout": 0, 00:26:30.935 "data_wr_pool_size": 0 00:26:30.935 } 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "method": "nvmf_create_subsystem", 00:26:30.935 "params": { 00:26:30.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.935 "allow_any_host": false, 00:26:30.935 "serial_number": "00000000000000000000", 00:26:30.935 "model_number": "SPDK bdev Controller", 00:26:30.935 "max_namespaces": 32, 00:26:30.935 "min_cntlid": 1, 00:26:30.935 "max_cntlid": 65519, 00:26:30.935 "ana_reporting": false 00:26:30.935 } 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "method": "nvmf_subsystem_add_host", 00:26:30.935 "params": { 00:26:30.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.935 "host": "nqn.2016-06.io.spdk:host1", 00:26:30.935 "psk": "key0" 00:26:30.935 } 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "method": "nvmf_subsystem_add_ns", 00:26:30.935 "params": { 00:26:30.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.935 "namespace": { 00:26:30.935 "nsid": 1, 00:26:30.935 "bdev_name": "malloc0", 00:26:30.935 "nguid": "CDA88D1A1B5D4A63A8C894D33C665EAE", 00:26:30.935 "uuid": "cda88d1a-1b5d-4a63-a8c8-94d33c665eae", 00:26:30.935 "no_auto_visible": false 00:26:30.935 } 00:26:30.935 } 00:26:30.935 }, 00:26:30.935 { 00:26:30.935 "method": "nvmf_subsystem_add_listener", 00:26:30.935 "params": { 00:26:30.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.935 "listen_address": { 00:26:30.935 "trtype": "TCP", 00:26:30.936 "adrfam": "IPv4", 00:26:30.936 "traddr": "10.0.0.2", 00:26:30.936 "trsvcid": "4420" 00:26:30.936 }, 00:26:30.936 "secure_channel": false, 00:26:30.936 "sock_impl": 
"ssl" 00:26:30.936 } 00:26:30.936 } 00:26:30.936 ] 00:26:30.936 } 00:26:30.936 ] 00:26:30.936 }' 00:26:30.936 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:30.936 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=449219 00:26:30.936 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:26:30.936 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 449219 00:26:30.936 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 449219 ']' 00:26:30.936 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.936 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:30.936 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.936 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:30.936 00:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:31.194 [2024-12-10 00:08:15.453543] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:26:31.194 [2024-12-10 00:08:15.453590] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.194 [2024-12-10 00:08:15.547100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.194 [2024-12-10 00:08:15.583724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.194 [2024-12-10 00:08:15.583754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:31.195 [2024-12-10 00:08:15.583763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:31.195 [2024-12-10 00:08:15.583771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:31.195 [2024-12-10 00:08:15.583794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:31.195 [2024-12-10 00:08:15.584391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.453 [2024-12-10 00:08:15.799087] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.453 [2024-12-10 00:08:15.831116] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:31.453 [2024-12-10 00:08:15.831336] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.021 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.021 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:32.021 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:32.021 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:32.021 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:32.021 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.021 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=449492 00:26:32.021 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 449492 /var/tmp/bdevperf.sock 00:26:32.021 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 449492 ']' 00:26:32.021 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:32.021 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.021 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:26:32.021 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:32.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:32.021 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.022 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:26:32.022 "subsystems": [ 00:26:32.022 { 00:26:32.022 "subsystem": "keyring", 00:26:32.022 "config": [ 00:26:32.022 { 00:26:32.022 "method": "keyring_file_add_key", 00:26:32.022 "params": { 00:26:32.022 "name": "key0", 00:26:32.022 "path": "/tmp/tmp.lTb0vi6UrS" 00:26:32.022 } 00:26:32.022 } 00:26:32.022 ] 00:26:32.022 }, 00:26:32.022 { 00:26:32.022 "subsystem": "iobuf", 00:26:32.022 "config": [ 00:26:32.022 { 00:26:32.022 "method": "iobuf_set_options", 00:26:32.022 "params": { 00:26:32.022 "small_pool_count": 8192, 00:26:32.022 "large_pool_count": 1024, 00:26:32.022 "small_bufsize": 8192, 00:26:32.022 "large_bufsize": 135168, 00:26:32.022 "enable_numa": false 00:26:32.022 } 00:26:32.022 } 00:26:32.022 ] 00:26:32.022 }, 00:26:32.022 { 00:26:32.022 "subsystem": "sock", 00:26:32.022 "config": [ 00:26:32.022 { 00:26:32.022 "method": "sock_set_default_impl", 00:26:32.022 "params": { 00:26:32.022 "impl_name": "posix" 00:26:32.022 } 00:26:32.022 }, 00:26:32.022 { 00:26:32.022 "method": "sock_impl_set_options", 00:26:32.022 "params": { 00:26:32.022 "impl_name": "ssl", 00:26:32.022 "recv_buf_size": 4096, 00:26:32.022 "send_buf_size": 4096, 00:26:32.022 "enable_recv_pipe": true, 00:26:32.022 "enable_quickack": false, 00:26:32.022 "enable_placement_id": 0, 00:26:32.022 "enable_zerocopy_send_server": true, 00:26:32.022 "enable_zerocopy_send_client": false, 00:26:32.022 "zerocopy_threshold": 0, 00:26:32.022 "tls_version": 0, 00:26:32.022 "enable_ktls": false 00:26:32.022 } 00:26:32.022 }, 00:26:32.022 { 00:26:32.022 "method": "sock_impl_set_options", 00:26:32.022 "params": { 00:26:32.022 "impl_name": "posix", 00:26:32.022 "recv_buf_size": 2097152, 00:26:32.022 "send_buf_size": 2097152, 00:26:32.022 "enable_recv_pipe": true, 00:26:32.022 "enable_quickack": false, 00:26:32.022 "enable_placement_id": 0, 00:26:32.022 "enable_zerocopy_send_server": true, 00:26:32.022 "enable_zerocopy_send_client": false, 00:26:32.022 "zerocopy_threshold": 0, 00:26:32.022 "tls_version": 0, 00:26:32.022 "enable_ktls": false 00:26:32.022 } 00:26:32.022 } 00:26:32.022 ] 00:26:32.022 }, 00:26:32.022 { 00:26:32.022 "subsystem": "vmd", 00:26:32.022 "config": [] 00:26:32.022 }, 00:26:32.022 { 00:26:32.022 "subsystem": "accel", 00:26:32.022 "config": [ 00:26:32.022 { 00:26:32.022 "method": "accel_set_options", 00:26:32.022 "params": { 00:26:32.022 "small_cache_size": 128, 00:26:32.022 "large_cache_size": 16, 00:26:32.022 "task_count": 2048, 00:26:32.022 "sequence_count": 2048, 00:26:32.022 "buf_count": 2048 00:26:32.022 } 00:26:32.022 } 00:26:32.022 ] 00:26:32.022 }, 00:26:32.022 { 00:26:32.022 "subsystem": "bdev", 00:26:32.022 "config": [ 00:26:32.022 { 00:26:32.022 "method": "bdev_set_options", 00:26:32.022 "params": { 00:26:32.022 "bdev_io_pool_size": 65535, 00:26:32.022 "bdev_io_cache_size": 256, 00:26:32.022 "bdev_auto_examine": true, 00:26:32.022 "iobuf_small_cache_size": 128, 00:26:32.022 "iobuf_large_cache_size": 16 00:26:32.022 } 00:26:32.022 }, 00:26:32.022 { 00:26:32.022 "method": "bdev_raid_set_options", 00:26:32.022 "params": { 00:26:32.022 "process_window_size_kb": 1024, 00:26:32.022 "process_max_bandwidth_mb_sec": 0 00:26:32.022 } 00:26:32.022 }, 00:26:32.022 { 00:26:32.022 "method": "bdev_iscsi_set_options", 00:26:32.022 "params": { 00:26:32.022 "timeout_sec": 30 00:26:32.022 } 00:26:32.022 }, 00:26:32.022 { 
00:26:32.022 "method": "bdev_nvme_set_options", 00:26:32.022 "params": { 00:26:32.022 "action_on_timeout": "none", 00:26:32.022 "timeout_us": 0, 00:26:32.022 "timeout_admin_us": 0, 00:26:32.022 "keep_alive_timeout_ms": 10000, 00:26:32.022 "arbitration_burst": 0, 00:26:32.022 "low_priority_weight": 0, 00:26:32.022 "medium_priority_weight": 0, 00:26:32.022 "high_priority_weight": 0, 00:26:32.022 "nvme_adminq_poll_period_us": 10000, 00:26:32.022 "nvme_ioq_poll_period_us": 0, 00:26:32.022 "io_queue_requests": 512, 00:26:32.022 "delay_cmd_submit": true, 00:26:32.022 "transport_retry_count": 4, 00:26:32.022 "bdev_retry_count": 3, 00:26:32.022 "transport_ack_timeout": 0, 00:26:32.022 "ctrlr_loss_timeout_sec": 0, 00:26:32.022 "reconnect_delay_sec": 0, 00:26:32.022 "fast_io_fail_timeout_sec": 0, 00:26:32.022 "disable_auto_failback": false, 00:26:32.022 "generate_uuids": false, 00:26:32.022 "transport_tos": 0, 00:26:32.022 "nvme_error_stat": false, 00:26:32.022 "rdma_srq_size": 0, 00:26:32.022 "io_path_stat": false, 00:26:32.022 "allow_accel_sequence": false, 00:26:32.022 "rdma_max_cq_size": 0, 00:26:32.022 "rdma_cm_event_timeout_ms": 0, 00:26:32.022 "dhchap_digests": [ 00:26:32.022 "sha256", 00:26:32.022 "sha384", 00:26:32.022 "sha512" 00:26:32.022 ], 00:26:32.022 "dhchap_dhgroups": [ 00:26:32.022 "null", 00:26:32.022 "ffdhe2048", 00:26:32.022 "ffdhe3072", 00:26:32.022 "ffdhe4096", 00:26:32.022 "ffdhe6144", 00:26:32.022 "ffdhe8192" 00:26:32.022 ] 00:26:32.022 } 00:26:32.022 }, 00:26:32.022 { 00:26:32.022 "method": "bdev_nvme_attach_controller", 00:26:32.022 "params": { 00:26:32.022 "name": "nvme0", 00:26:32.022 "trtype": "TCP", 00:26:32.022 "adrfam": "IPv4", 00:26:32.022 "traddr": "10.0.0.2", 00:26:32.022 "trsvcid": "4420", 00:26:32.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:32.022 "prchk_reftag": false, 00:26:32.022 "prchk_guard": false, 00:26:32.022 "ctrlr_loss_timeout_sec": 0, 00:26:32.022 "reconnect_delay_sec": 0, 00:26:32.022 "fast_io_fail_timeout_sec": 0, 00:26:32.022 "psk": "key0", 00:26:32.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:32.022 "hdgst": false, 00:26:32.022 "ddgst": false, 00:26:32.022 "multipath": "multipath" 00:26:32.022 } 00:26:32.023 }, 00:26:32.023 { 00:26:32.023 "method": "bdev_nvme_set_hotplug", 00:26:32.023 "params": { 00:26:32.023 "period_us": 100000, 00:26:32.023 "enable": false 00:26:32.023 } 00:26:32.023 }, 00:26:32.023 { 00:26:32.023 "method": "bdev_enable_histogram", 00:26:32.023 "params": { 00:26:32.023 "name": "nvme0n1", 00:26:32.023 "enable": true 00:26:32.023 } 00:26:32.023 }, 00:26:32.023 { 00:26:32.023 "method": "bdev_wait_for_examine" 00:26:32.023 } 00:26:32.023 ] 00:26:32.023 }, 00:26:32.023 { 00:26:32.023 "subsystem": "nbd", 00:26:32.023 "config": [] 00:26:32.023 } 00:26:32.023 ] 00:26:32.023 }' 00:26:32.023 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:32.023 [2024-12-10 00:08:16.371665] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:26:32.023 [2024-12-10 00:08:16.371713] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449492 ] 00:26:32.023 [2024-12-10 00:08:16.461346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.281 [2024-12-10 00:08:16.500796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.281 [2024-12-10 00:08:16.653947] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:32.849 00:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.849 00:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:26:32.849 00:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:32.849 00:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:26:33.108 00:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.108 00:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:33.108 Running I/O for 1 seconds... 00:26:34.305 5314.00 IOPS, 20.76 MiB/s 00:26:34.305 Latency(us) 00:26:34.305 [2024-12-09T23:08:18.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.305 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:34.305 Verification LBA range: start 0x0 length 0x2000 00:26:34.305 nvme0n1 : 1.01 5379.51 21.01 0.00 0.00 23642.42 4928.31 27682.41 00:26:34.305 [2024-12-09T23:08:18.778Z] =================================================================================================================== 00:26:34.305 [2024-12-09T23:08:18.778Z] Total : 5379.51 21.01 0.00 0.00 23642.42 4928.31 27682.41 00:26:34.305 { 00:26:34.305 "results": [ 00:26:34.305 { 00:26:34.305 "job": "nvme0n1", 00:26:34.305 "core_mask": "0x2", 00:26:34.305 "workload": "verify", 00:26:34.305 "status": "finished", 00:26:34.305 "verify_range": { 00:26:34.305 "start": 0, 00:26:34.305 "length": 8192 00:26:34.305 }, 00:26:34.305 "queue_depth": 128, 00:26:34.305 "io_size": 4096, 00:26:34.305 "runtime": 1.011616, 00:26:34.305 "iops": 5379.5115933318575, 00:26:34.305 "mibps": 21.01371716145257, 00:26:34.305 "io_failed": 0, 00:26:34.305 "io_timeout": 0, 00:26:34.305 "avg_latency_us": 23642.416678574053, 00:26:34.305 "min_latency_us": 4928.3072, 00:26:34.305 "max_latency_us": 27682.4064 00:26:34.305 } 00:26:34.305 ], 00:26:34.305 "core_count": 1 00:26:34.305 } 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 
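The summary block above is internally consistent: 5379.51 IOPS of 4096-byte I/O is 5379.51 * 4096 / 1048576 ≈ 21.01 MiB/s, matching the MiB/s column, and with a queue depth of 128 Little's law predicts an average latency of roughly 128 / 5379.51 s ≈ 23.8 ms, close to the reported 23642 us. If the JSON results dump is saved to a file (say results.json), the headline numbers can be pulled out with jq along these lines:

  # hypothetical one-liner; field names taken from the results dump printed above
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json
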
00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:34.305 nvmf_trace.0 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 449492 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 449492 ']' 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 449492 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 449492 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 449492' 00:26:34.305 killing process with pid 449492 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 449492 00:26:34.305 Received shutdown signal, test time was about 1.000000 seconds 00:26:34.305 00:26:34.305 Latency(us) 00:26:34.305 [2024-12-09T23:08:18.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.305 [2024-12-09T23:08:18.778Z] =================================================================================================================== 00:26:34.305 [2024-12-09T23:08:18.778Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:34.305 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 449492 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:34.565 rmmod nvme_tcp 00:26:34.565 rmmod nvme_fabrics 00:26:34.565 rmmod nvme_keyring 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:34.565 00:08:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 449219 ']' 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 449219 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 449219 ']' 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 449219 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 449219 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 449219' 00:26:34.565 killing process with pid 449219 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 449219 00:26:34.565 00:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 449219 00:26:34.825 00:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:34.825 00:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:34.825 00:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:34.825 00:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:26:34.825 00:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:26:34.825 00:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:34.825 00:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:26:34.825 00:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:34.825 00:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:34.825 00:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.825 00:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.825 00:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Vb48a25VMw /tmp/tmp.gjtnVYyZ5k /tmp/tmp.lTb0vi6UrS 00:26:37.367 00:26:37.367 real 1m26.646s 00:26:37.367 user 2m8.690s 00:26:37.367 sys 0m34.704s 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:37.367 ************************************ 00:26:37.367 END TEST nvmf_tls 00:26:37.367 
************************************ 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:37.367 ************************************ 00:26:37.367 START TEST nvmf_fips 00:26:37.367 ************************************ 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:37.367 * Looking for test storage... 00:26:37.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:37.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.367 --rc genhtml_branch_coverage=1 00:26:37.367 --rc genhtml_function_coverage=1 00:26:37.367 --rc genhtml_legend=1 00:26:37.367 --rc geninfo_all_blocks=1 00:26:37.367 --rc geninfo_unexecuted_blocks=1 00:26:37.367 00:26:37.367 ' 00:26:37.367 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:37.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.367 --rc genhtml_branch_coverage=1 00:26:37.368 --rc genhtml_function_coverage=1 00:26:37.368 --rc genhtml_legend=1 00:26:37.368 --rc geninfo_all_blocks=1 00:26:37.368 --rc geninfo_unexecuted_blocks=1 00:26:37.368 00:26:37.368 ' 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:37.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.368 --rc genhtml_branch_coverage=1 00:26:37.368 --rc genhtml_function_coverage=1 00:26:37.368 --rc genhtml_legend=1 00:26:37.368 --rc geninfo_all_blocks=1 00:26:37.368 --rc geninfo_unexecuted_blocks=1 00:26:37.368 00:26:37.368 ' 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:37.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.368 --rc genhtml_branch_coverage=1 00:26:37.368 --rc genhtml_function_coverage=1 00:26:37.368 --rc genhtml_legend=1 00:26:37.368 --rc geninfo_all_blocks=1 00:26:37.368 --rc geninfo_unexecuted_blocks=1 00:26:37.368 00:26:37.368 ' 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:37.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:26:37.368 00:08:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:26:37.368 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:26:37.369 Error setting digest 00:26:37.369 40425B501A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:26:37.369 40425B501A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:37.369 
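The trace above gates the FIPS test on three checks: the system OpenSSL must be at least 3.0.0 (the ge/cmp_versions walk), the FIPS provider module must exist under the directory reported by openssl info -modulesdir, and, once OPENSSL_CONF points at the generated spdk_fips.conf, a non-approved digest such as MD5 must be rejected. The following is a minimal stand-alone sketch of that pre-flight using only commands that appear in the trace; the fips.so filename and the modules directory are the ones from this RHEL 9 run and may differ on other builds:

    # Sketch only. Assumes the FIPS provider is already configured,
    # e.g. via OPENSSL_CONF, as fips.sh does with the generated spdk_fips.conf.
    set -e

    # OpenSSL must be >= 3.0.0 for provider-based FIPS.
    ver=$(openssl version | awk '{print $2}')
    [[ "$(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1)" == "3.0.0" ]]

    # The FIPS provider shared object must exist in the modules directory.
    moddir=$(openssl info -modulesdir)
    [[ -f "$moddir/fips.so" ]]

    # With FIPS enforced, a legacy digest has to fail
    # (the same failure the MD5 error above demonstrates).
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 succeeded, FIPS mode is not enforced" >&2
        exit 1
    fi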
00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:26:37.369 00:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.497 00:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:45.497 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:45.497 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:45.497 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:45.498 00:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:45.498 Found net devices under 0000:af:00.0: cvl_0_0 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:45.498 Found net devices under 0000:af:00.1: cvl_0_1 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:45.498 00:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:45.498 00:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:45.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:45.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:26:45.498 00:26:45.498 --- 10.0.0.2 ping statistics --- 00:26:45.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.498 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:45.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:26:45.498 00:26:45.498 --- 10.0.0.1 ping statistics --- 00:26:45.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.498 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=453734 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 453734 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 453734 ']' 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.498 00:08:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:45.498 [2024-12-10 00:08:29.235105] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
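For reference, the nvmf_tcp_init sequence traced above (between the E810 device discovery and the ping checks) reduces to moving one port of the NIC pair into a private network namespace so that the target (10.0.0.2) and the initiator (10.0.0.1) exchange traffic over a real link. Condensed from this run, with the iptables comment tag omitted; the interface and namespace names are the ones the script derived here:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address on the host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # host -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host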
00:26:45.498 [2024-12-10 00:08:29.235155] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.498 [2024-12-10 00:08:29.326842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.498 [2024-12-10 00:08:29.367914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.498 [2024-12-10 00:08:29.367948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.498 [2024-12-10 00:08:29.367968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.498 [2024-12-10 00:08:29.367976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.498 [2024-12-10 00:08:29.367999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:45.498 [2024-12-10 00:08:29.368564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.qqu 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.qqu 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.qqu 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.qqu 00:26:45.756 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:46.015 [2024-12-10 00:08:30.285695] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.015 [2024-12-10 00:08:30.301696] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:46.015 [2024-12-10 00:08:30.301901] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.015 malloc0 00:26:46.015 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:46.015 00:08:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=453966 00:26:46.015 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:46.015 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 453966 /var/tmp/bdevperf.sock 00:26:46.015 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 453966 ']' 00:26:46.015 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:46.015 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:46.015 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:46.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:46.015 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:46.015 00:08:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:46.015 [2024-12-10 00:08:30.435726] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:26:46.015 [2024-12-10 00:08:30.435780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453966 ] 00:26:46.273 [2024-12-10 00:08:30.525303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.273 [2024-12-10 00:08:30.564909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:46.837 00:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.837 00:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:26:46.837 00:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.qqu 00:26:47.094 00:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:47.350 [2024-12-10 00:08:31.607605] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:47.350 TLSTESTn1 00:26:47.350 00:08:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:47.350 Running I/O for 10 seconds... 
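Everything the fips.sh run needs on the initiator side is in the trace just above: write the interchange-format TLS PSK to a mode-0600 temp file, register it with the bdevperf keyring over its RPC socket, attach a TLS-protected NVMe/TCP controller against the listener at 10.0.0.2:4420, and let bdevperf drive the verify workload it was started with (-q 128 -o 4096 -w verify -t 10). A condensed sketch of that RPC sequence, with the rpc.py/bdevperf.py paths shortened and the temp file name fixed to the one mktemp returned in this run:

    KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$KEY" > /tmp/spdk-psk.qqu
    chmod 0600 /tmp/spdk-psk.qqu                      # keyring_file rejects world-readable PSK files

    # Register the key with bdevperf and attach the TLS-enabled controller.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.qqu
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # Kick off the 10-second verify run bdevperf was configured for.
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests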
00:26:49.651 4865.00 IOPS, 19.00 MiB/s [2024-12-09T23:08:35.059Z] 5095.00 IOPS, 19.90 MiB/s [2024-12-09T23:08:35.991Z] 5186.00 IOPS, 20.26 MiB/s [2024-12-09T23:08:36.921Z] 5228.00 IOPS, 20.42 MiB/s [2024-12-09T23:08:37.852Z] 5239.80 IOPS, 20.47 MiB/s [2024-12-09T23:08:39.222Z] 5255.50 IOPS, 20.53 MiB/s [2024-12-09T23:08:40.154Z] 5265.29 IOPS, 20.57 MiB/s [2024-12-09T23:08:41.086Z] 5261.00 IOPS, 20.55 MiB/s [2024-12-09T23:08:42.019Z] 5275.67 IOPS, 20.61 MiB/s [2024-12-09T23:08:42.019Z] 5244.60 IOPS, 20.49 MiB/s 00:26:57.546 Latency(us) 00:26:57.546 [2024-12-09T23:08:42.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.546 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:57.546 Verification LBA range: start 0x0 length 0x2000 00:26:57.546 TLSTESTn1 : 10.04 5237.42 20.46 0.00 0.00 24382.56 6107.96 38587.60 00:26:57.546 [2024-12-09T23:08:42.019Z] =================================================================================================================== 00:26:57.546 [2024-12-09T23:08:42.019Z] Total : 5237.42 20.46 0.00 0.00 24382.56 6107.96 38587.60 00:26:57.546 { 00:26:57.546 "results": [ 00:26:57.546 { 00:26:57.546 "job": "TLSTESTn1", 00:26:57.546 "core_mask": "0x4", 00:26:57.546 "workload": "verify", 00:26:57.546 "status": "finished", 00:26:57.546 "verify_range": { 00:26:57.546 "start": 0, 00:26:57.546 "length": 8192 00:26:57.546 }, 00:26:57.546 "queue_depth": 128, 00:26:57.546 "io_size": 4096, 00:26:57.546 "runtime": 10.038145, 00:26:57.546 "iops": 5237.421854336633, 00:26:57.546 "mibps": 20.458679118502474, 00:26:57.546 "io_failed": 0, 00:26:57.546 "io_timeout": 0, 00:26:57.546 "avg_latency_us": 24382.557233978772, 00:26:57.546 "min_latency_us": 6107.9552, 00:26:57.546 "max_latency_us": 38587.5968 00:26:57.546 } 00:26:57.546 ], 00:26:57.546 "core_count": 1 00:26:57.546 } 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:57.546 nvmf_trace.0 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 453966 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 453966 ']' 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # 
kill -0 453966 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:57.546 00:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453966 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453966' 00:26:57.803 killing process with pid 453966 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 453966 00:26:57.803 Received shutdown signal, test time was about 10.000000 seconds 00:26:57.803 00:26:57.803 Latency(us) 00:26:57.803 [2024-12-09T23:08:42.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.803 [2024-12-09T23:08:42.276Z] =================================================================================================================== 00:26:57.803 [2024-12-09T23:08:42.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 453966 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:57.803 rmmod nvme_tcp 00:26:57.803 rmmod nvme_fabrics 00:26:57.803 rmmod nvme_keyring 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 453734 ']' 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 453734 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 453734 ']' 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 453734 00:26:57.803 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:26:58.062 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:58.062 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453734 00:26:58.062 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:58.062 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:58.062 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453734' 00:26:58.062 killing process with pid 453734 00:26:58.062 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 453734 00:26:58.062 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 453734 00:26:58.062 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:58.063 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:58.063 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:58.063 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:26:58.063 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:26:58.063 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:58.063 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:26:58.063 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:58.063 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:58.063 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.063 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.063 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.qqu 00:27:00.602 00:27:00.602 real 0m23.269s 00:27:00.602 user 0m23.572s 00:27:00.602 sys 0m11.252s 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:00.602 ************************************ 00:27:00.602 END TEST nvmf_fips 00:27:00.602 ************************************ 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:00.602 ************************************ 00:27:00.602 START TEST nvmf_control_msg_list 00:27:00.602 ************************************ 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:27:00.602 * Looking for test storage... 
00:27:00.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:00.602 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:00.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.603 --rc genhtml_branch_coverage=1 00:27:00.603 --rc genhtml_function_coverage=1 00:27:00.603 --rc genhtml_legend=1 00:27:00.603 --rc geninfo_all_blocks=1 00:27:00.603 --rc geninfo_unexecuted_blocks=1 00:27:00.603 00:27:00.603 ' 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:00.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.603 --rc genhtml_branch_coverage=1 00:27:00.603 --rc genhtml_function_coverage=1 00:27:00.603 --rc genhtml_legend=1 00:27:00.603 --rc geninfo_all_blocks=1 00:27:00.603 --rc geninfo_unexecuted_blocks=1 00:27:00.603 00:27:00.603 ' 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:00.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.603 --rc genhtml_branch_coverage=1 00:27:00.603 --rc genhtml_function_coverage=1 00:27:00.603 --rc genhtml_legend=1 00:27:00.603 --rc geninfo_all_blocks=1 00:27:00.603 --rc geninfo_unexecuted_blocks=1 00:27:00.603 00:27:00.603 ' 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:00.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.603 --rc genhtml_branch_coverage=1 00:27:00.603 --rc genhtml_function_coverage=1 00:27:00.603 --rc genhtml_legend=1 00:27:00.603 --rc geninfo_all_blocks=1 00:27:00.603 --rc geninfo_unexecuted_blocks=1 00:27:00.603 00:27:00.603 ' 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:00.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:27:00.603 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:27:08.747 00:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:08.747 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.747 00:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:08.747 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:08.747 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:08.748 Found net devices under 0000:af:00.0: cvl_0_0 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:08.748 Found net devices under 0000:af:00.1: cvl_0_1 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:08.748 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:08.748 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:08.748 00:08:52 
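At this point nvmf_tcp_init has turned the two ports into a back-to-back pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). Condensed from the commands traced above, with comments added:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up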
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:08.748 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:08.748 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:08.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:08.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:27:08.748 00:27:08.748 --- 10.0.0.2 ping statistics --- 00:27:08.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.748 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:27:08.748 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:08.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:08.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:27:08.748 00:27:08.748 --- 10.0.0.1 ping statistics --- 00:27:08.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.748 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:27:08.748 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:08.748 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:27:08.748 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=459558 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 459558 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 459558 ']' 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:08.749 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:08.749 [2024-12-10 00:08:52.236953] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:27:08.749 [2024-12-10 00:08:52.237020] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.749 [2024-12-10 00:08:52.332269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.749 [2024-12-10 00:08:52.368541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.749 [2024-12-10 00:08:52.368576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.749 [2024-12-10 00:08:52.368585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.749 [2024-12-10 00:08:52.368593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:08.749 [2024-12-10 00:08:52.368600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
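Before the target app is usable, the harness punches a firewall hole for the NVMe/TCP port, verifies connectivity with one ping in each direction, then launches nvmf_tgt inside the target namespace and waits for its RPC socket. Condensed from the trace, with paths shortened and the socket wait written out as a simple loop (the real waitforlisten helper is more involved than this sketch):

# Allow the initiator side to reach the TCP listener; the comment tag lets teardown
# strip exactly this rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done  # simplified waitforlisten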
00:27:08.749 [2024-12-10 00:08:52.369187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:08.749 [2024-12-10 00:08:53.119224] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:08.749 Malloc0 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.749 00:08:53 
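The control_msg_list target is then configured over that socket: a TCP transport whose control-message pool is deliberately capped at one entry (with a 768-byte in-capsule data size), one subsystem, and a malloc namespace; the listener on 10.0.0.2:4420 is added a few lines below. Condensed from the rpc_cmd trace (rpc_cmd being the harness wrapper around scripts/rpc.py):

rpc_cmd nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
rpc_cmd bdev_malloc_create -b Malloc0 32 512          # 32 MiB malloc bdev, 512-byte blocks
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Three spdk_nvme_perf clients (one each on lcores 1, 2 and 3, queue depth 1, 4 KiB random reads for one second) are then pointed at that listener, which is what produces the three latency tables that follow.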
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:08.749 [2024-12-10 00:08:53.167587] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=459835 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=459836 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:08.749 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=459837 00:27:08.750 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 459835 00:27:08.750 00:08:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:09.010 [2024-12-10 00:08:53.252089] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:09.010 [2024-12-10 00:08:53.261942] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:09.010 [2024-12-10 00:08:53.272032] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:09.939 Initializing NVMe Controllers 00:27:09.939 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:27:09.939 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:27:09.939 Initialization complete. Launching workers. 
00:27:09.939 ======================================================== 00:27:09.939 Latency(us) 00:27:09.939 Device Information : IOPS MiB/s Average min max 00:27:09.939 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41089.56 40714.34 41958.92 00:27:09.939 ======================================================== 00:27:09.939 Total : 25.00 0.10 41089.56 40714.34 41958.92 00:27:09.939 00:27:10.202 Initializing NVMe Controllers 00:27:10.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:27:10.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:27:10.202 Initialization complete. Launching workers. 00:27:10.202 ======================================================== 00:27:10.202 Latency(us) 00:27:10.202 Device Information : IOPS MiB/s Average min max 00:27:10.202 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6125.99 23.93 162.90 133.49 432.88 00:27:10.202 ======================================================== 00:27:10.202 Total : 6125.99 23.93 162.90 133.49 432.88 00:27:10.202 00:27:10.202 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 459836 00:27:10.202 Initializing NVMe Controllers 00:27:10.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:27:10.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:27:10.202 Initialization complete. Launching workers. 00:27:10.202 ======================================================== 00:27:10.202 Latency(us) 00:27:10.202 Device Information : IOPS MiB/s Average min max 00:27:10.202 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6335.00 24.75 157.53 126.46 345.16 00:27:10.202 ======================================================== 00:27:10.202 Total : 6335.00 24.75 157.53 126.46 345.16 00:27:10.202 00:27:10.202 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 459837 00:27:10.202 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:10.202 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:27:10.202 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:10.202 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:27:10.202 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:10.202 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:27:10.202 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:10.202 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:10.203 rmmod nvme_tcp 00:27:10.203 rmmod nvme_fabrics 00:27:10.203 rmmod nvme_keyring 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 459558 ']' 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 459558 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 459558 ']' 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 459558 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 459558 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 459558' 00:27:10.203 killing process with pid 459558 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 459558 00:27:10.203 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 459558 00:27:10.463 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:10.463 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:10.463 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:10.463 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:27:10.463 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:27:10.463 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:10.463 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:27:10.463 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:10.463 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:10.463 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.463 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.463 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.000 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:13.000 00:27:13.000 real 0m12.238s 00:27:13.000 user 0m7.753s 00:27:13.000 sys 0m6.905s 00:27:13.000 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:13.000 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:13.000 ************************************ 00:27:13.000 END TEST nvmf_control_msg_list 00:27:13.000 ************************************ 00:27:13.000 
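That closes out the first test: the EXIT trap fires nvmftestfini, which unloads the NVMe host modules, kills the target, strips only the SPDK-tagged iptables rule, and tears the namespace plumbing back down. Roughly, as seen above (the namespace removal itself is traced into /dev/null, so that line is an assumption about what _remove_spdk_ns does):

modprobe -v -r nvme-tcp                  # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid"; wait "$nvmfpid"         # killprocess 459558
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged ACCEPT rule
ip netns delete cvl_0_0_ns_spdk          # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1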
00:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:27:13.000 00:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:13.000 00:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:13.000 00:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:13.000 ************************************ 00:27:13.000 START TEST nvmf_wait_for_buf 00:27:13.000 ************************************ 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:27:13.000 * Looking for test storage... 00:27:13.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:13.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.000 --rc genhtml_branch_coverage=1 00:27:13.000 --rc genhtml_function_coverage=1 00:27:13.000 --rc genhtml_legend=1 00:27:13.000 --rc geninfo_all_blocks=1 00:27:13.000 --rc geninfo_unexecuted_blocks=1 00:27:13.000 00:27:13.000 ' 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:13.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.000 --rc genhtml_branch_coverage=1 00:27:13.000 --rc genhtml_function_coverage=1 00:27:13.000 --rc genhtml_legend=1 00:27:13.000 --rc geninfo_all_blocks=1 00:27:13.000 --rc geninfo_unexecuted_blocks=1 00:27:13.000 00:27:13.000 ' 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:13.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.000 --rc genhtml_branch_coverage=1 00:27:13.000 --rc genhtml_function_coverage=1 00:27:13.000 --rc genhtml_legend=1 00:27:13.000 --rc geninfo_all_blocks=1 00:27:13.000 --rc geninfo_unexecuted_blocks=1 00:27:13.000 00:27:13.000 ' 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:13.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.000 --rc genhtml_branch_coverage=1 00:27:13.000 --rc genhtml_function_coverage=1 00:27:13.000 --rc genhtml_legend=1 00:27:13.000 --rc geninfo_all_blocks=1 00:27:13.000 --rc geninfo_unexecuted_blocks=1 00:27:13.000 00:27:13.000 ' 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:13.000 00:08:57 
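The scripts/common.sh trace above is the harness deciding which lcov flag set to use: it takes the last field of lcov --version, splits both that and the threshold "2" on dots and dashes, and compares field by field. Reconstructed as a compact sketch (function name and structure are mine; the real cmp_versions is what the trace shows step by step):

lcov_ver=$(lcov --version | awk '{print $NF}')        # e.g. 1.15
version_lt() {                                        # returns 0 (true) if $1 < $2
    local -a a b; local i
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1
}
version_lt "$lcov_ver" 2 && echo "using pre-2.0 lcov coverage flags"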
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.000 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:13.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:13.001 00:08:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:21.125 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.125 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:21.125 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:21.125 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:21.125 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.126 
00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:21.126 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:21.126 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:21.126 Found net devices under 0000:af:00.0: cvl_0_0 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:21.126 Found net devices under 0000:af:00.1: cvl_0_1 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.126 00:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:21.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:27:21.126 00:27:21.126 --- 10.0.0.2 ping statistics --- 00:27:21.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.126 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:21.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:27:21.126 00:27:21.126 --- 10.0.0.1 ping statistics --- 00:27:21.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.126 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:21.126 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=464087 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 464087 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 464087 ']' 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:21.127 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:21.127 [2024-12-10 00:09:04.551269] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:27:21.127 [2024-12-10 00:09:04.551324] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.127 [2024-12-10 00:09:04.648813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.127 [2024-12-10 00:09:04.688650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.127 [2024-12-10 00:09:04.688687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.127 [2024-12-10 00:09:04.688696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.127 [2024-12-10 00:09:04.688708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.127 [2024-12-10 00:09:04.688715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.127 [2024-12-10 00:09:04.689298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.127 00:09:05 
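Unlike the previous test, this target was started with --wait-for-rpc, so subsystem initialization is held back until the RPCs above run: accel buffer caching is switched off and the small iobuf pool is shrunk to 154 entries of 8 KiB before framework_start_init lets the app finish coming up. Reading between the lines (the log does not state intent), the point is to make buffer starvation easy to provoke so the wait-for-buf path actually gets exercised. Condensed:

rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0    # no per-channel caching
rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192  # deliberately tiny pool
rpc_cmd framework_start_init                                           # resume normal startup

A malloc bdev, the nqn.2024-07.io.spdk:cnode0 subsystem, and a TCP transport with only 24 shared buffers (-n 24 -b 24, 8 KiB I/O units) follow below, and a single spdk_nvme_perf client then issues 128 KiB random reads at queue depth 4 against the listener.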
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:21.127 Malloc0 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:21.127 [2024-12-10 00:09:05.529361] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:21.127 [2024-12-10 00:09:05.557561] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.127 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:21.385 [2024-12-10 00:09:05.647898] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:22.766 Initializing NVMe Controllers 00:27:22.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:27:22.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:27:22.766 Initialization complete. Launching workers. 00:27:22.766 ======================================================== 00:27:22.766 Latency(us) 00:27:22.766 Device Information : IOPS MiB/s Average min max 00:27:22.766 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.00 15.88 32810.75 7283.19 63854.15 00:27:22.766 ======================================================== 00:27:22.766 Total : 127.00 15.88 32810.75 7283.19 63854.15 00:27:22.766 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2006 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2006 -eq 0 ]] 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:22.766 rmmod nvme_tcp 00:27:22.766 rmmod nvme_fabrics 00:27:22.766 rmmod nvme_keyring 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 464087 ']' 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 464087 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 464087 ']' 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 464087 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:22.766 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 464087 00:27:23.025 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:23.025 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:23.025 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 464087' 00:27:23.025 killing process with pid 464087 00:27:23.025 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 464087 00:27:23.025 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 464087 00:27:23.026 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:23.026 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:23.026 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:23.026 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:27:23.026 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:27:23.026 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:23.026 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:27:23.026 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:23.026 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:23.026 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.026 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.026 00:09:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.559 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:25.559 00:27:25.559 real 0m12.512s 00:27:25.559 user 0m4.980s 00:27:25.559 sys 0m6.247s 00:27:25.559 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:25.559 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:25.559 ************************************ 00:27:25.559 END TEST nvmf_wait_for_buf 00:27:25.559 ************************************ 00:27:25.559 00:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:27:25.559 00:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:25.559 00:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:25.559 00:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:25.559 00:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:25.559 00:09:09 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:32.132 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.132 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:32.133 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:32.133 Found net devices under 0000:af:00.0: cvl_0_0 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:32.133 Found net devices under 0000:af:00.1: cvl_0_1 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:32.133 ************************************ 00:27:32.133 START TEST nvmf_perf_adq 00:27:32.133 ************************************ 00:27:32.133 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:32.392 * Looking for test storage... 00:27:32.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:32.393 00:09:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:32.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.393 --rc genhtml_branch_coverage=1 00:27:32.393 --rc genhtml_function_coverage=1 00:27:32.393 --rc genhtml_legend=1 00:27:32.393 --rc geninfo_all_blocks=1 00:27:32.393 --rc geninfo_unexecuted_blocks=1 00:27:32.393 00:27:32.393 ' 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:32.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.393 --rc genhtml_branch_coverage=1 00:27:32.393 --rc genhtml_function_coverage=1 00:27:32.393 --rc genhtml_legend=1 00:27:32.393 --rc geninfo_all_blocks=1 00:27:32.393 --rc geninfo_unexecuted_blocks=1 00:27:32.393 00:27:32.393 ' 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:32.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.393 --rc genhtml_branch_coverage=1 00:27:32.393 --rc genhtml_function_coverage=1 00:27:32.393 --rc genhtml_legend=1 00:27:32.393 --rc geninfo_all_blocks=1 00:27:32.393 --rc geninfo_unexecuted_blocks=1 00:27:32.393 00:27:32.393 ' 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:32.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.393 --rc genhtml_branch_coverage=1 00:27:32.393 --rc genhtml_function_coverage=1 00:27:32.393 --rc genhtml_legend=1 00:27:32.393 --rc geninfo_all_blocks=1 00:27:32.393 --rc geninfo_unexecuted_blocks=1 00:27:32.393 00:27:32.393 ' 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
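The "Found 0000:af:00.x" and "Found net devices under ..." lines in this and the previous test come from the gather_supported_nvmf_pci_devs helper, which buckets NICs by PCI vendor/device ID and then reads the netdev names registered under each function in sysfs. A rough standalone sketch of that lookup, hard-coding the E810 ID pair (0x8086 / 0x159b) seen in this run; the loop structure and output wording are illustrative rather than the helper's own:

```bash
#!/usr/bin/env bash
# Sketch: find E810 ports and the kernel netdevs bound to them via sysfs.
intel=0x8086          # vendor ID used by the helper above
e810_dev=0x159b       # device ID reported for 0000:af:00.0 / 00.1 in this log
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$intel" ]]    || continue
    [[ $(cat "$pci/device") == "$e810_dev" ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue             # skip functions with no netdev
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done
```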
00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:32.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:32.393 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:32.393 00:09:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:40.516 00:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.516 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:40.517 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:40.517 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:40.517 Found net devices under 0000:af:00.0: cvl_0_0 00:27:40.517 00:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:40.517 Found net devices under 0000:af:00.1: cvl_0_1 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:40.517 00:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:40.777 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:44.067 00:09:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:49.350 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:49.351 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:49.351 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:49.351 Found net devices under 0000:af:00.0: cvl_0_0 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:49.351 Found net devices under 0000:af:00.1: cvl_0_1 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:49.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:49.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:27:49.351 00:27:49.351 --- 10.0.0.2 ping statistics --- 00:27:49.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.351 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:49.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:49.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:27:49.351 00:27:49.351 --- 10.0.0.1 ping statistics --- 00:27:49.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.351 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=473457 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 473457 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 473457 ']' 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.351 00:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.351 [2024-12-10 00:09:33.730632] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
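The nvmf_tcp_init trace just above gives the ADQ test the same split used earlier: one E810 port moves into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, while its sibling stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of that sequence using the interface and namespace names from this run (the address flushes and the helper wrappers are left out):

```bash
#!/usr/bin/env bash
# Condensed sketch of the target/initiator split performed above: the target
# port lives in a network namespace, the initiator port stays in the host.
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP (port 4420) in from the initiator interface, then sanity-ping
# both directions, as the trace above does.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```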
00:27:49.351 [2024-12-10 00:09:33.730676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.351 [2024-12-10 00:09:33.823475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:49.610 [2024-12-10 00:09:33.863265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.610 [2024-12-10 00:09:33.863305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.610 [2024-12-10 00:09:33.863314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.610 [2024-12-10 00:09:33.863322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.610 [2024-12-10 00:09:33.863329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:49.610 [2024-12-10 00:09:33.865100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.610 [2024-12-10 00:09:33.865213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.610 [2024-12-10 00:09:33.865299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.610 [2024-12-10 00:09:33.865300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:50.176 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:50.176 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:50.176 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:50.176 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:50.176 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:50.176 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.176 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:50.176 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:50.176 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:50.176 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.176 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:50.176 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.434 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:50.434 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:50.434 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.434 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:50.434 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.434 
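The configuration that adq_configure_nvmf_target 0 drives through rpc_cmd over the next few steps could be reproduced by hand with SPDK's rpc.py against the same target; the scripts/rpc.py spelling below is an assumption based on the standard SPDK tooling, while the method names, bdev size, NQN, and listener address are copied from the log. The only knobs that change between this baseline run and the ADQ run later in the log are the placement-id / sock-priority pair (0 here, 1 with ADQ):

  # Baseline (ADQ off): placement-id 0 and sock priority 0
  scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420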
00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:50.434 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.434 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:50.434 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.434 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:50.434 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:50.435 [2024-12-10 00:09:34.748873] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:50.435 Malloc1 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:50.435 [2024-12-10 00:09:34.810708] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=473740 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:50.435 00:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:52.970 00:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:52.970 00:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.970 00:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.970 00:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.970 00:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:52.970 "tick_rate": 2500000000, 00:27:52.970 "poll_groups": [ 00:27:52.970 { 00:27:52.970 "name": "nvmf_tgt_poll_group_000", 00:27:52.970 "admin_qpairs": 1, 00:27:52.970 "io_qpairs": 1, 00:27:52.970 "current_admin_qpairs": 1, 00:27:52.970 "current_io_qpairs": 1, 00:27:52.970 "pending_bdev_io": 0, 00:27:52.970 "completed_nvme_io": 19678, 00:27:52.970 "transports": [ 00:27:52.970 { 00:27:52.970 "trtype": "TCP" 00:27:52.970 } 00:27:52.970 ] 00:27:52.970 }, 00:27:52.970 { 00:27:52.970 "name": "nvmf_tgt_poll_group_001", 00:27:52.970 "admin_qpairs": 0, 00:27:52.970 "io_qpairs": 1, 00:27:52.970 "current_admin_qpairs": 0, 00:27:52.970 "current_io_qpairs": 1, 00:27:52.970 "pending_bdev_io": 0, 00:27:52.970 "completed_nvme_io": 19397, 00:27:52.970 "transports": [ 00:27:52.970 { 00:27:52.970 "trtype": "TCP" 00:27:52.970 } 00:27:52.970 ] 00:27:52.970 }, 00:27:52.970 { 00:27:52.970 "name": "nvmf_tgt_poll_group_002", 00:27:52.970 "admin_qpairs": 0, 00:27:52.970 "io_qpairs": 1, 00:27:52.970 "current_admin_qpairs": 0, 00:27:52.970 "current_io_qpairs": 1, 00:27:52.970 "pending_bdev_io": 0, 00:27:52.970 "completed_nvme_io": 19811, 00:27:52.970 "transports": [ 00:27:52.970 { 00:27:52.970 "trtype": "TCP" 00:27:52.970 } 00:27:52.970 ] 00:27:52.970 }, 00:27:52.970 { 00:27:52.970 "name": "nvmf_tgt_poll_group_003", 00:27:52.970 "admin_qpairs": 0, 00:27:52.970 "io_qpairs": 1, 00:27:52.970 "current_admin_qpairs": 0, 00:27:52.970 "current_io_qpairs": 1, 00:27:52.970 "pending_bdev_io": 0, 00:27:52.970 "completed_nvme_io": 19731, 00:27:52.970 "transports": [ 00:27:52.970 { 00:27:52.970 "trtype": "TCP" 00:27:52.970 } 00:27:52.970 ] 00:27:52.970 } 00:27:52.970 ] 00:27:52.970 }' 00:27:52.970 00:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:52.970 00:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:52.970 00:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:52.970 00:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:52.970 00:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 473740 00:28:01.084 Initializing NVMe Controllers 00:28:01.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:01.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:01.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:01.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:01.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:28:01.084 Initialization complete. Launching workers. 00:28:01.084 ======================================================== 00:28:01.084 Latency(us) 00:28:01.084 Device Information : IOPS MiB/s Average min max 00:28:01.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10431.00 40.75 6135.13 1688.10 10244.72 00:28:01.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10420.90 40.71 6154.56 1302.26 44141.15 00:28:01.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10714.90 41.86 5973.00 2265.03 12763.71 00:28:01.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10555.80 41.23 6063.62 2309.31 9960.79 00:28:01.084 ======================================================== 00:28:01.084 Total : 42122.60 164.54 6080.77 1302.26 44141.15 00:28:01.084 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:01.084 rmmod nvme_tcp 00:28:01.084 rmmod nvme_fabrics 00:28:01.084 rmmod nvme_keyring 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 473457 ']' 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 473457 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 473457 ']' 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 473457 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 473457 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 473457' 00:28:01.084 killing process with pid 473457 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 473457 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 473457 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.084 00:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.991 00:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:02.991 00:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:02.991 00:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:02.991 00:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:04.370 00:09:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:06.901 00:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:12.182 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:12.182 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:12.182 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.182 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:12.182 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:12.182 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:12.182 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.182 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.182 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:12.183 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:12.184 00:09:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:12.184 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:12.184 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.184 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:12.185 Found net devices under 0000:af:00.0: cvl_0_0 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.185 00:09:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:12.185 Found net devices under 0000:af:00.1: cvl_0_1 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:12.185 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.186 00:09:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:12.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:28:12.186 00:28:12.186 --- 10.0.0.2 ping statistics --- 00:28:12.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.186 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:12.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:28:12.186 00:28:12.186 --- 10.0.0.1 ping statistics --- 00:28:12.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.186 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:12.186 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:12.186 net.core.busy_poll = 1 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:28:12.187 net.core.busy_read = 1 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=477565 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 477565 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 477565 ']' 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.187 00:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.187 [2024-12-10 00:09:56.565411] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:28:12.187 [2024-12-10 00:09:56.565461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.450 [2024-12-10 00:09:56.662106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:12.450 [2024-12-10 00:09:56.701264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
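The adq_configure_driver step logged just above is the heart of the ADQ setup: hardware TC offload is enabled on the target-side port, busy polling is switched on, an mqprio root qdisc splits the device into two traffic classes, and a hardware flower filter steers NVMe/TCP traffic (TCP dst port 4420 toward 10.0.0.2) into the second class. Condensed from the log and run inside the target namespace via ip netns exec cvl_0_0_ns_spdk (queue layout and the set_xps_rxqs helper path follow the log; treat this as a sketch, not the literal perf_adq.sh code):

  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  scripts/perf/nvmf/set_xps_rxqs cvl_0_0        # align XPS with the queues owned by the traffic class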
00:28:12.450 [2024-12-10 00:09:56.701310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.450 [2024-12-10 00:09:56.701320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.450 [2024-12-10 00:09:56.701328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.450 [2024-12-10 00:09:56.701335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.450 [2024-12-10 00:09:56.703064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.450 [2024-12-10 00:09:56.703177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.450 [2024-12-10 00:09:56.703262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.450 [2024-12-10 00:09:56.703263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:13.016 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.274 00:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.274 [2024-12-10 00:09:57.582663] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.274 Malloc1 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.274 [2024-12-10 00:09:57.653199] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=477832 00:28:13.274 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:13.275 00:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:15.804 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:15.804 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.804 00:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.804 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.804 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:15.804 "tick_rate": 2500000000, 00:28:15.804 "poll_groups": [ 00:28:15.804 { 00:28:15.804 "name": "nvmf_tgt_poll_group_000", 00:28:15.804 "admin_qpairs": 1, 00:28:15.804 "io_qpairs": 3, 00:28:15.804 "current_admin_qpairs": 1, 00:28:15.804 "current_io_qpairs": 3, 00:28:15.804 "pending_bdev_io": 0, 00:28:15.804 "completed_nvme_io": 30725, 00:28:15.804 "transports": [ 00:28:15.804 { 00:28:15.804 "trtype": "TCP" 00:28:15.804 } 00:28:15.804 ] 00:28:15.804 }, 00:28:15.804 { 00:28:15.804 "name": "nvmf_tgt_poll_group_001", 00:28:15.804 "admin_qpairs": 0, 00:28:15.804 "io_qpairs": 1, 00:28:15.804 "current_admin_qpairs": 0, 00:28:15.804 "current_io_qpairs": 1, 00:28:15.804 "pending_bdev_io": 0, 00:28:15.804 "completed_nvme_io": 28234, 00:28:15.804 "transports": [ 00:28:15.804 { 00:28:15.804 "trtype": "TCP" 00:28:15.804 } 00:28:15.804 ] 00:28:15.804 }, 00:28:15.804 { 00:28:15.804 "name": "nvmf_tgt_poll_group_002", 00:28:15.804 "admin_qpairs": 0, 00:28:15.804 "io_qpairs": 0, 00:28:15.804 "current_admin_qpairs": 0, 00:28:15.804 "current_io_qpairs": 0, 00:28:15.804 "pending_bdev_io": 0, 00:28:15.804 "completed_nvme_io": 0, 00:28:15.804 "transports": [ 00:28:15.804 { 00:28:15.804 "trtype": "TCP" 00:28:15.804 } 00:28:15.804 ] 00:28:15.804 }, 00:28:15.804 { 00:28:15.804 "name": "nvmf_tgt_poll_group_003", 00:28:15.804 "admin_qpairs": 0, 00:28:15.804 "io_qpairs": 0, 00:28:15.804 "current_admin_qpairs": 0, 00:28:15.804 "current_io_qpairs": 0, 00:28:15.804 "pending_bdev_io": 0, 00:28:15.804 "completed_nvme_io": 0, 00:28:15.804 "transports": [ 00:28:15.804 { 00:28:15.804 "trtype": "TCP" 00:28:15.804 } 00:28:15.804 ] 00:28:15.804 } 00:28:15.804 ] 00:28:15.804 }' 00:28:15.804 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:15.804 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:15.804 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:15.804 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:15.804 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 477832 00:28:23.920 Initializing NVMe Controllers 00:28:23.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:23.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:23.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:23.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:23.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:23.920 Initialization complete. Launching workers. 
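Before the latency summary below, note what the nvmf_get_stats check above established: with placement-id 1 and the flower filter in place, the I/O queue pairs collapse onto the poll groups whose cores own the dedicated traffic class (3 + 1 qpairs on groups 000 and 001 here) and the remaining groups stay idle, whereas the baseline run earlier showed exactly one qpair on every group. A stand-alone form of the same pass/fail check might look like this (the stats file name is illustrative):

  scripts/rpc.py nvmf_get_stats > /tmp/nvmf_stats.json
  idle=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | .name' /tmp/nvmf_stats.json | wc -l)
  if [ "$idle" -lt 2 ]; then
      echo "qpairs are still spread across all poll groups; ADQ steering not effective"
      exit 1
  fi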
00:28:23.920 ======================================================== 00:28:23.920 Latency(us) 00:28:23.920 Device Information : IOPS MiB/s Average min max 00:28:23.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 15457.00 60.38 4139.96 1564.89 6430.47 00:28:23.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5573.80 21.77 11484.43 1669.13 58176.34 00:28:23.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5211.20 20.36 12283.75 1724.79 57299.84 00:28:23.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4826.90 18.86 13265.83 1668.01 57659.86 00:28:23.920 ======================================================== 00:28:23.920 Total : 31068.90 121.36 8241.33 1564.89 58176.34 00:28:23.920 00:28:23.920 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:23.920 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:23.920 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:23.920 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.920 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:23.920 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.920 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.920 rmmod nvme_tcp 00:28:23.920 rmmod nvme_fabrics 00:28:23.920 rmmod nvme_keyring 00:28:23.920 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.920 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:23.921 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:23.921 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 477565 ']' 00:28:23.921 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 477565 00:28:23.921 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 477565 ']' 00:28:23.921 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 477565 00:28:23.921 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:23.921 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.921 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 477565 00:28:23.921 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:23.921 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:23.921 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 477565' 00:28:23.921 killing process with pid 477565 00:28:23.921 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 477565 00:28:23.921 00:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 477565 00:28:23.921 00:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.921 00:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.921 00:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.921 00:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:23.921 00:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:23.921 00:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.921 00:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.921 00:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.921 00:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.921 00:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.921 00:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.921 00:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:27.456 00:28:27.456 real 0m54.720s 00:28:27.456 user 2m47.893s 00:28:27.456 sys 0m15.051s 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:27.456 ************************************ 00:28:27.456 END TEST nvmf_perf_adq 00:28:27.456 ************************************ 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:27.456 ************************************ 00:28:27.456 START TEST nvmf_shutdown 00:28:27.456 ************************************ 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:27.456 * Looking for test storage... 
00:28:27.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:27.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.456 --rc genhtml_branch_coverage=1 00:28:27.456 --rc genhtml_function_coverage=1 00:28:27.456 --rc genhtml_legend=1 00:28:27.456 --rc geninfo_all_blocks=1 00:28:27.456 --rc geninfo_unexecuted_blocks=1 00:28:27.456 00:28:27.456 ' 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:27.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.456 --rc genhtml_branch_coverage=1 00:28:27.456 --rc genhtml_function_coverage=1 00:28:27.456 --rc genhtml_legend=1 00:28:27.456 --rc geninfo_all_blocks=1 00:28:27.456 --rc geninfo_unexecuted_blocks=1 00:28:27.456 00:28:27.456 ' 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:27.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.456 --rc genhtml_branch_coverage=1 00:28:27.456 --rc genhtml_function_coverage=1 00:28:27.456 --rc genhtml_legend=1 00:28:27.456 --rc geninfo_all_blocks=1 00:28:27.456 --rc geninfo_unexecuted_blocks=1 00:28:27.456 00:28:27.456 ' 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:27.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.456 --rc genhtml_branch_coverage=1 00:28:27.456 --rc genhtml_function_coverage=1 00:28:27.456 --rc genhtml_legend=1 00:28:27.456 --rc geninfo_all_blocks=1 00:28:27.456 --rc geninfo_unexecuted_blocks=1 00:28:27.456 00:28:27.456 ' 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.456 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:27.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:27.457 00:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:27.457 ************************************ 00:28:27.457 START TEST nvmf_shutdown_tc1 00:28:27.457 ************************************ 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:27.457 00:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:35.649 00:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:35.649 00:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:35.649 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:35.649 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:35.649 Found net devices under 0000:af:00.0: cvl_0_0 00:28:35.649 00:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:35.649 Found net devices under 0000:af:00.1: cvl_0_1 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:35.649 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:35.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:35.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:28:35.650 00:28:35.650 --- 10.0.0.2 ping statistics --- 00:28:35.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.650 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:35.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:35.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:28:35.650 00:28:35.650 --- 10.0.0.1 ping statistics --- 00:28:35.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.650 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=483507 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 483507 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 483507 ']' 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
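Condensed, the nvmf_tcp_init sequence traced above gives the target its own network namespace and a point-to-point 10.0.0.0/24 link to the initiator side before nvmf_tgt is started inside that namespace; the cvl_* interface names and addresses below are simply the ones this run used:

ip netns add cvl_0_0_ns_spdk                          # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target-facing E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
ping -c 1 10.0.0.2                                    # root namespace -> target namespace sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace sanity check
# nvmfappstart then launches the target inside the namespace, as the next entries show:
# ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E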
00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.650 00:10:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:35.650 [2024-12-10 00:10:19.016950] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:28:35.650 [2024-12-10 00:10:19.016998] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.650 [2024-12-10 00:10:19.116228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:35.650 [2024-12-10 00:10:19.154858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.650 [2024-12-10 00:10:19.154900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.650 [2024-12-10 00:10:19.154909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:35.650 [2024-12-10 00:10:19.154917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:35.650 [2024-12-10 00:10:19.154924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:35.650 [2024-12-10 00:10:19.156670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.650 [2024-12-10 00:10:19.156777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.650 [2024-12-10 00:10:19.156865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.650 [2024-12-10 00:10:19.156866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:35.650 [2024-12-10 00:10:19.900600] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:35.650 00:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:35.650 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.651 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:35.651 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.651 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:35.651 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:35.651 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.651 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:35.651 Malloc1 
00:28:35.651 [2024-12-10 00:10:20.029888] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.651 Malloc2 00:28:35.651 Malloc3 00:28:35.909 Malloc4 00:28:35.909 Malloc5 00:28:35.909 Malloc6 00:28:35.909 Malloc7 00:28:35.909 Malloc8 00:28:35.909 Malloc9 00:28:36.168 Malloc10 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=483798 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 483798 /var/tmp/bdevperf.sock 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 483798 ']' 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:36.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
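Between the transport creation and the Malloc1..Malloc10 output above, the create_subsystems loop (target/shutdown.sh@27-36 in the trace) cats one block of RPCs per subsystem, presumably into rpcs.txt, and replays them in a single rpc_cmd batch; the file contents themselves are not echoed. A representative block for one iteration, assuming the standard SPDK RPC names and the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 / port 4420 values set earlier (the serial number is illustrative):

bdev_malloc_create 64 512 -b Malloc1                           # 64 MiB bdev with 512-byte blocks
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1   # serial "SPDK1" is an assumed placeholder
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The subsystem NQNs line up with the nqn.2016-06.io.spdk:cnode1..10 entries that gen_nvmf_target_json feeds to the bdevperf wrapper in the JSON config printed below.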
00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.168 { 00:28:36.168 "params": { 00:28:36.168 "name": "Nvme$subsystem", 00:28:36.168 "trtype": "$TEST_TRANSPORT", 00:28:36.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.168 "adrfam": "ipv4", 00:28:36.168 "trsvcid": "$NVMF_PORT", 00:28:36.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.168 "hdgst": ${hdgst:-false}, 00:28:36.168 "ddgst": ${ddgst:-false} 00:28:36.168 }, 00:28:36.168 "method": "bdev_nvme_attach_controller" 00:28:36.168 } 00:28:36.168 EOF 00:28:36.168 )") 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.168 { 00:28:36.168 "params": { 00:28:36.168 "name": "Nvme$subsystem", 00:28:36.168 "trtype": "$TEST_TRANSPORT", 00:28:36.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.168 "adrfam": "ipv4", 00:28:36.168 "trsvcid": "$NVMF_PORT", 00:28:36.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.168 "hdgst": ${hdgst:-false}, 00:28:36.168 "ddgst": ${ddgst:-false} 00:28:36.168 }, 00:28:36.168 "method": "bdev_nvme_attach_controller" 00:28:36.168 } 00:28:36.168 EOF 00:28:36.168 )") 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.168 { 00:28:36.168 "params": { 00:28:36.168 "name": "Nvme$subsystem", 00:28:36.168 "trtype": "$TEST_TRANSPORT", 00:28:36.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.168 "adrfam": "ipv4", 00:28:36.168 "trsvcid": "$NVMF_PORT", 00:28:36.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.168 "hdgst": ${hdgst:-false}, 00:28:36.168 "ddgst": ${ddgst:-false} 00:28:36.168 }, 00:28:36.168 "method": "bdev_nvme_attach_controller" 00:28:36.168 } 00:28:36.168 EOF 00:28:36.168 )") 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:28:36.168 { 00:28:36.168 "params": { 00:28:36.168 "name": "Nvme$subsystem", 00:28:36.168 "trtype": "$TEST_TRANSPORT", 00:28:36.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.168 "adrfam": "ipv4", 00:28:36.168 "trsvcid": "$NVMF_PORT", 00:28:36.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.168 "hdgst": ${hdgst:-false}, 00:28:36.168 "ddgst": ${ddgst:-false} 00:28:36.168 }, 00:28:36.168 "method": "bdev_nvme_attach_controller" 00:28:36.168 } 00:28:36.168 EOF 00:28:36.168 )") 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.168 { 00:28:36.168 "params": { 00:28:36.168 "name": "Nvme$subsystem", 00:28:36.168 "trtype": "$TEST_TRANSPORT", 00:28:36.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.168 "adrfam": "ipv4", 00:28:36.168 "trsvcid": "$NVMF_PORT", 00:28:36.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.168 "hdgst": ${hdgst:-false}, 00:28:36.168 "ddgst": ${ddgst:-false} 00:28:36.168 }, 00:28:36.168 "method": "bdev_nvme_attach_controller" 00:28:36.168 } 00:28:36.168 EOF 00:28:36.168 )") 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.168 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.168 { 00:28:36.168 "params": { 00:28:36.168 "name": "Nvme$subsystem", 00:28:36.168 "trtype": "$TEST_TRANSPORT", 00:28:36.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.169 "adrfam": "ipv4", 00:28:36.169 "trsvcid": "$NVMF_PORT", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.169 "hdgst": ${hdgst:-false}, 00:28:36.169 "ddgst": ${ddgst:-false} 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 } 00:28:36.169 EOF 00:28:36.169 )") 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.169 [2024-12-10 00:10:20.521466] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:28:36.169 [2024-12-10 00:10:20.521518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.169 { 00:28:36.169 "params": { 00:28:36.169 "name": "Nvme$subsystem", 00:28:36.169 "trtype": "$TEST_TRANSPORT", 00:28:36.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.169 "adrfam": "ipv4", 00:28:36.169 "trsvcid": "$NVMF_PORT", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.169 "hdgst": ${hdgst:-false}, 00:28:36.169 "ddgst": ${ddgst:-false} 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 } 00:28:36.169 EOF 00:28:36.169 )") 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.169 { 00:28:36.169 "params": { 00:28:36.169 "name": "Nvme$subsystem", 00:28:36.169 "trtype": "$TEST_TRANSPORT", 00:28:36.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.169 "adrfam": "ipv4", 00:28:36.169 "trsvcid": "$NVMF_PORT", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.169 "hdgst": ${hdgst:-false}, 00:28:36.169 "ddgst": ${ddgst:-false} 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 } 00:28:36.169 EOF 00:28:36.169 )") 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.169 { 00:28:36.169 "params": { 00:28:36.169 "name": "Nvme$subsystem", 00:28:36.169 "trtype": "$TEST_TRANSPORT", 00:28:36.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.169 "adrfam": "ipv4", 00:28:36.169 "trsvcid": "$NVMF_PORT", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.169 "hdgst": ${hdgst:-false}, 00:28:36.169 "ddgst": ${ddgst:-false} 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 } 00:28:36.169 EOF 00:28:36.169 )") 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.169 { 00:28:36.169 "params": { 00:28:36.169 "name": "Nvme$subsystem", 00:28:36.169 "trtype": "$TEST_TRANSPORT", 00:28:36.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.169 "adrfam": "ipv4", 
00:28:36.169 "trsvcid": "$NVMF_PORT", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.169 "hdgst": ${hdgst:-false}, 00:28:36.169 "ddgst": ${ddgst:-false} 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 } 00:28:36.169 EOF 00:28:36.169 )") 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:36.169 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:36.169 "params": { 00:28:36.169 "name": "Nvme1", 00:28:36.169 "trtype": "tcp", 00:28:36.169 "traddr": "10.0.0.2", 00:28:36.169 "adrfam": "ipv4", 00:28:36.169 "trsvcid": "4420", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:36.169 "hdgst": false, 00:28:36.169 "ddgst": false 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 },{ 00:28:36.169 "params": { 00:28:36.169 "name": "Nvme2", 00:28:36.169 "trtype": "tcp", 00:28:36.169 "traddr": "10.0.0.2", 00:28:36.169 "adrfam": "ipv4", 00:28:36.169 "trsvcid": "4420", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:36.169 "hdgst": false, 00:28:36.169 "ddgst": false 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 },{ 00:28:36.169 "params": { 00:28:36.169 "name": "Nvme3", 00:28:36.169 "trtype": "tcp", 00:28:36.169 "traddr": "10.0.0.2", 00:28:36.169 "adrfam": "ipv4", 00:28:36.169 "trsvcid": "4420", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:36.169 "hdgst": false, 00:28:36.169 "ddgst": false 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 },{ 00:28:36.169 "params": { 00:28:36.169 "name": "Nvme4", 00:28:36.169 "trtype": "tcp", 00:28:36.169 "traddr": "10.0.0.2", 00:28:36.169 "adrfam": "ipv4", 00:28:36.169 "trsvcid": "4420", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:36.169 "hdgst": false, 00:28:36.169 "ddgst": false 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 },{ 00:28:36.169 "params": { 00:28:36.169 "name": "Nvme5", 00:28:36.169 "trtype": "tcp", 00:28:36.169 "traddr": "10.0.0.2", 00:28:36.169 "adrfam": "ipv4", 00:28:36.169 "trsvcid": "4420", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:36.169 "hdgst": false, 00:28:36.169 "ddgst": false 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 },{ 00:28:36.169 "params": { 00:28:36.169 "name": "Nvme6", 00:28:36.169 "trtype": "tcp", 00:28:36.169 "traddr": "10.0.0.2", 00:28:36.169 "adrfam": "ipv4", 00:28:36.169 "trsvcid": "4420", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:36.169 "hdgst": false, 00:28:36.169 "ddgst": false 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 },{ 00:28:36.169 "params": { 00:28:36.169 "name": "Nvme7", 00:28:36.169 "trtype": "tcp", 00:28:36.169 "traddr": "10.0.0.2", 00:28:36.169 
"adrfam": "ipv4", 00:28:36.169 "trsvcid": "4420", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:36.169 "hdgst": false, 00:28:36.169 "ddgst": false 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 },{ 00:28:36.169 "params": { 00:28:36.169 "name": "Nvme8", 00:28:36.169 "trtype": "tcp", 00:28:36.169 "traddr": "10.0.0.2", 00:28:36.169 "adrfam": "ipv4", 00:28:36.169 "trsvcid": "4420", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:36.169 "hdgst": false, 00:28:36.169 "ddgst": false 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 },{ 00:28:36.169 "params": { 00:28:36.169 "name": "Nvme9", 00:28:36.169 "trtype": "tcp", 00:28:36.169 "traddr": "10.0.0.2", 00:28:36.169 "adrfam": "ipv4", 00:28:36.169 "trsvcid": "4420", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:36.169 "hdgst": false, 00:28:36.169 "ddgst": false 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 },{ 00:28:36.169 "params": { 00:28:36.169 "name": "Nvme10", 00:28:36.169 "trtype": "tcp", 00:28:36.169 "traddr": "10.0.0.2", 00:28:36.169 "adrfam": "ipv4", 00:28:36.169 "trsvcid": "4420", 00:28:36.169 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:36.169 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:36.169 "hdgst": false, 00:28:36.169 "ddgst": false 00:28:36.169 }, 00:28:36.169 "method": "bdev_nvme_attach_controller" 00:28:36.169 }' 00:28:36.169 [2024-12-10 00:10:20.616370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.428 [2024-12-10 00:10:20.656531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.802 00:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.802 00:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:37.802 00:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:37.802 00:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.802 00:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.802 00:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.802 00:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 483798 00:28:37.802 00:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:37.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 483798 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:37.802 00:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:38.737 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 483507 00:28:38.737 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:38.737 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:38.737 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:38.737 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:38.737 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.737 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.737 { 00:28:38.737 "params": { 00:28:38.737 "name": "Nvme$subsystem", 00:28:38.737 "trtype": "$TEST_TRANSPORT", 00:28:38.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.737 "adrfam": "ipv4", 00:28:38.737 "trsvcid": "$NVMF_PORT", 00:28:38.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.737 "hdgst": ${hdgst:-false}, 00:28:38.737 "ddgst": ${ddgst:-false} 00:28:38.737 }, 00:28:38.737 "method": "bdev_nvme_attach_controller" 00:28:38.737 } 00:28:38.737 EOF 00:28:38.737 )") 00:28:38.737 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.737 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.737 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.737 { 00:28:38.737 "params": { 00:28:38.737 "name": "Nvme$subsystem", 00:28:38.737 "trtype": "$TEST_TRANSPORT", 00:28:38.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.737 "adrfam": "ipv4", 00:28:38.737 "trsvcid": "$NVMF_PORT", 00:28:38.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.737 "hdgst": ${hdgst:-false}, 00:28:38.737 "ddgst": ${ddgst:-false} 00:28:38.737 }, 00:28:38.737 "method": "bdev_nvme_attach_controller" 00:28:38.737 } 00:28:38.737 EOF 00:28:38.737 )") 00:28:38.737 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.737 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.738 { 00:28:38.738 "params": { 00:28:38.738 "name": "Nvme$subsystem", 00:28:38.738 "trtype": "$TEST_TRANSPORT", 00:28:38.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.738 "adrfam": "ipv4", 00:28:38.738 "trsvcid": "$NVMF_PORT", 00:28:38.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.738 "hdgst": ${hdgst:-false}, 00:28:38.738 "ddgst": ${ddgst:-false} 00:28:38.738 }, 00:28:38.738 "method": "bdev_nvme_attach_controller" 00:28:38.738 } 00:28:38.738 EOF 00:28:38.738 )") 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.738 { 00:28:38.738 "params": { 00:28:38.738 "name": "Nvme$subsystem", 00:28:38.738 "trtype": "$TEST_TRANSPORT", 00:28:38.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.738 "adrfam": "ipv4", 00:28:38.738 "trsvcid": "$NVMF_PORT", 00:28:38.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.738 "hdgst": ${hdgst:-false}, 00:28:38.738 "ddgst": ${ddgst:-false} 00:28:38.738 }, 00:28:38.738 "method": "bdev_nvme_attach_controller" 00:28:38.738 } 00:28:38.738 EOF 00:28:38.738 )") 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.738 { 00:28:38.738 "params": { 00:28:38.738 "name": "Nvme$subsystem", 00:28:38.738 "trtype": "$TEST_TRANSPORT", 00:28:38.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.738 "adrfam": "ipv4", 00:28:38.738 "trsvcid": "$NVMF_PORT", 00:28:38.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.738 "hdgst": ${hdgst:-false}, 00:28:38.738 "ddgst": ${ddgst:-false} 00:28:38.738 }, 00:28:38.738 "method": "bdev_nvme_attach_controller" 00:28:38.738 } 00:28:38.738 EOF 00:28:38.738 )") 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.738 { 00:28:38.738 "params": { 00:28:38.738 "name": "Nvme$subsystem", 00:28:38.738 "trtype": "$TEST_TRANSPORT", 00:28:38.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.738 "adrfam": "ipv4", 00:28:38.738 "trsvcid": "$NVMF_PORT", 00:28:38.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.738 "hdgst": ${hdgst:-false}, 00:28:38.738 "ddgst": ${ddgst:-false} 00:28:38.738 }, 00:28:38.738 "method": "bdev_nvme_attach_controller" 00:28:38.738 } 00:28:38.738 EOF 00:28:38.738 )") 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.738 [2024-12-10 00:10:22.911243] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:28:38.738 [2024-12-10 00:10:22.911295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484337 ] 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.738 { 00:28:38.738 "params": { 00:28:38.738 "name": "Nvme$subsystem", 00:28:38.738 "trtype": "$TEST_TRANSPORT", 00:28:38.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.738 "adrfam": "ipv4", 00:28:38.738 "trsvcid": "$NVMF_PORT", 00:28:38.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.738 "hdgst": ${hdgst:-false}, 00:28:38.738 "ddgst": ${ddgst:-false} 00:28:38.738 }, 00:28:38.738 "method": "bdev_nvme_attach_controller" 00:28:38.738 } 00:28:38.738 EOF 00:28:38.738 )") 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.738 { 00:28:38.738 "params": { 00:28:38.738 "name": "Nvme$subsystem", 00:28:38.738 "trtype": "$TEST_TRANSPORT", 00:28:38.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.738 "adrfam": "ipv4", 00:28:38.738 "trsvcid": "$NVMF_PORT", 00:28:38.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.738 "hdgst": ${hdgst:-false}, 00:28:38.738 "ddgst": ${ddgst:-false} 00:28:38.738 }, 00:28:38.738 "method": "bdev_nvme_attach_controller" 00:28:38.738 } 00:28:38.738 EOF 00:28:38.738 )") 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.738 { 00:28:38.738 "params": { 00:28:38.738 "name": "Nvme$subsystem", 00:28:38.738 "trtype": "$TEST_TRANSPORT", 00:28:38.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.738 "adrfam": "ipv4", 00:28:38.738 "trsvcid": "$NVMF_PORT", 00:28:38.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.738 "hdgst": ${hdgst:-false}, 00:28:38.738 "ddgst": ${ddgst:-false} 00:28:38.738 }, 00:28:38.738 "method": "bdev_nvme_attach_controller" 00:28:38.738 } 00:28:38.738 EOF 00:28:38.738 )") 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.738 { 00:28:38.738 "params": { 00:28:38.738 "name": "Nvme$subsystem", 00:28:38.738 "trtype": "$TEST_TRANSPORT", 00:28:38.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.738 
"adrfam": "ipv4", 00:28:38.738 "trsvcid": "$NVMF_PORT", 00:28:38.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.738 "hdgst": ${hdgst:-false}, 00:28:38.738 "ddgst": ${ddgst:-false} 00:28:38.738 }, 00:28:38.738 "method": "bdev_nvme_attach_controller" 00:28:38.738 } 00:28:38.738 EOF 00:28:38.738 )") 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:38.738 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:38.738 "params": { 00:28:38.738 "name": "Nvme1", 00:28:38.738 "trtype": "tcp", 00:28:38.738 "traddr": "10.0.0.2", 00:28:38.738 "adrfam": "ipv4", 00:28:38.738 "trsvcid": "4420", 00:28:38.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:38.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:38.738 "hdgst": false, 00:28:38.738 "ddgst": false 00:28:38.738 }, 00:28:38.738 "method": "bdev_nvme_attach_controller" 00:28:38.738 },{ 00:28:38.738 "params": { 00:28:38.738 "name": "Nvme2", 00:28:38.738 "trtype": "tcp", 00:28:38.738 "traddr": "10.0.0.2", 00:28:38.738 "adrfam": "ipv4", 00:28:38.738 "trsvcid": "4420", 00:28:38.738 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:38.738 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:38.738 "hdgst": false, 00:28:38.738 "ddgst": false 00:28:38.738 }, 00:28:38.738 "method": "bdev_nvme_attach_controller" 00:28:38.738 },{ 00:28:38.738 "params": { 00:28:38.738 "name": "Nvme3", 00:28:38.738 "trtype": "tcp", 00:28:38.738 "traddr": "10.0.0.2", 00:28:38.738 "adrfam": "ipv4", 00:28:38.738 "trsvcid": "4420", 00:28:38.738 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:38.738 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:38.738 "hdgst": false, 00:28:38.738 "ddgst": false 00:28:38.738 }, 00:28:38.738 "method": "bdev_nvme_attach_controller" 00:28:38.738 },{ 00:28:38.738 "params": { 00:28:38.738 "name": "Nvme4", 00:28:38.738 "trtype": "tcp", 00:28:38.738 "traddr": "10.0.0.2", 00:28:38.738 "adrfam": "ipv4", 00:28:38.738 "trsvcid": "4420", 00:28:38.738 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:38.738 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:38.738 "hdgst": false, 00:28:38.738 "ddgst": false 00:28:38.738 }, 00:28:38.738 "method": "bdev_nvme_attach_controller" 00:28:38.738 },{ 00:28:38.738 "params": { 00:28:38.738 "name": "Nvme5", 00:28:38.738 "trtype": "tcp", 00:28:38.738 "traddr": "10.0.0.2", 00:28:38.738 "adrfam": "ipv4", 00:28:38.738 "trsvcid": "4420", 00:28:38.738 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:38.738 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:38.738 "hdgst": false, 00:28:38.738 "ddgst": false 00:28:38.739 }, 00:28:38.739 "method": "bdev_nvme_attach_controller" 00:28:38.739 },{ 00:28:38.739 "params": { 00:28:38.739 "name": "Nvme6", 00:28:38.739 "trtype": "tcp", 00:28:38.739 "traddr": "10.0.0.2", 00:28:38.739 "adrfam": "ipv4", 00:28:38.739 "trsvcid": "4420", 00:28:38.739 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:38.739 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:38.739 "hdgst": false, 00:28:38.739 "ddgst": false 00:28:38.739 }, 00:28:38.739 "method": "bdev_nvme_attach_controller" 00:28:38.739 },{ 00:28:38.739 "params": { 00:28:38.739 "name": "Nvme7", 00:28:38.739 "trtype": "tcp", 00:28:38.739 "traddr": "10.0.0.2", 
00:28:38.739 "adrfam": "ipv4", 00:28:38.739 "trsvcid": "4420", 00:28:38.739 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:38.739 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:38.739 "hdgst": false, 00:28:38.739 "ddgst": false 00:28:38.739 }, 00:28:38.739 "method": "bdev_nvme_attach_controller" 00:28:38.739 },{ 00:28:38.739 "params": { 00:28:38.739 "name": "Nvme8", 00:28:38.739 "trtype": "tcp", 00:28:38.739 "traddr": "10.0.0.2", 00:28:38.739 "adrfam": "ipv4", 00:28:38.739 "trsvcid": "4420", 00:28:38.739 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:38.739 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:38.739 "hdgst": false, 00:28:38.739 "ddgst": false 00:28:38.739 }, 00:28:38.739 "method": "bdev_nvme_attach_controller" 00:28:38.739 },{ 00:28:38.739 "params": { 00:28:38.739 "name": "Nvme9", 00:28:38.739 "trtype": "tcp", 00:28:38.739 "traddr": "10.0.0.2", 00:28:38.739 "adrfam": "ipv4", 00:28:38.739 "trsvcid": "4420", 00:28:38.739 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:38.739 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:38.739 "hdgst": false, 00:28:38.739 "ddgst": false 00:28:38.739 }, 00:28:38.739 "method": "bdev_nvme_attach_controller" 00:28:38.739 },{ 00:28:38.739 "params": { 00:28:38.739 "name": "Nvme10", 00:28:38.739 "trtype": "tcp", 00:28:38.739 "traddr": "10.0.0.2", 00:28:38.739 "adrfam": "ipv4", 00:28:38.739 "trsvcid": "4420", 00:28:38.739 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:38.739 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:38.739 "hdgst": false, 00:28:38.739 "ddgst": false 00:28:38.739 }, 00:28:38.739 "method": "bdev_nvme_attach_controller" 00:28:38.739 }' 00:28:38.739 [2024-12-10 00:10:23.003245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.739 [2024-12-10 00:10:23.042457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.112 Running I/O for 1 seconds... 
00:28:41.309 2327.00 IOPS, 145.44 MiB/s 00:28:41.309 Latency(us) 00:28:41.309 [2024-12-09T23:10:25.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.309 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.309 Verification LBA range: start 0x0 length 0x400 00:28:41.309 Nvme1n1 : 1.11 292.33 18.27 0.00 0.00 216235.73 9017.75 203843.17 00:28:41.309 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.309 Verification LBA range: start 0x0 length 0x400 00:28:41.310 Nvme2n1 : 1.13 284.29 17.77 0.00 0.00 220224.23 15099.49 214748.36 00:28:41.310 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.310 Verification LBA range: start 0x0 length 0x400 00:28:41.310 Nvme3n1 : 1.10 291.66 18.23 0.00 0.00 211520.96 11953.77 206359.76 00:28:41.310 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.310 Verification LBA range: start 0x0 length 0x400 00:28:41.310 Nvme4n1 : 1.10 293.24 18.33 0.00 0.00 206814.95 5347.74 197971.15 00:28:41.310 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.310 Verification LBA range: start 0x0 length 0x400 00:28:41.310 Nvme5n1 : 1.13 286.77 17.92 0.00 0.00 209187.27 2398.62 208876.34 00:28:41.310 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.310 Verification LBA range: start 0x0 length 0x400 00:28:41.310 Nvme6n1 : 1.13 282.61 17.66 0.00 0.00 209506.96 16882.07 223136.97 00:28:41.310 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.310 Verification LBA range: start 0x0 length 0x400 00:28:41.310 Nvme7n1 : 1.11 295.22 18.45 0.00 0.00 196580.79 4037.02 203004.31 00:28:41.310 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.310 Verification LBA range: start 0x0 length 0x400 00:28:41.310 Nvme8n1 : 1.12 285.72 17.86 0.00 0.00 201128.43 13316.92 203843.17 00:28:41.310 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.310 Verification LBA range: start 0x0 length 0x400 00:28:41.310 Nvme9n1 : 1.14 281.93 17.62 0.00 0.00 201150.79 18245.22 218103.81 00:28:41.310 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.310 Verification LBA range: start 0x0 length 0x400 00:28:41.310 Nvme10n1 : 1.14 281.23 17.58 0.00 0.00 198685.82 16357.79 231525.58 00:28:41.310 [2024-12-09T23:10:25.783Z] =================================================================================================================== 00:28:41.310 [2024-12-09T23:10:25.783Z] Total : 2875.00 179.69 0.00 0.00 207090.60 2398.62 231525.58 00:28:41.310 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:41.310 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:41.310 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:41.569 00:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.569 rmmod nvme_tcp 00:28:41.569 rmmod nvme_fabrics 00:28:41.569 rmmod nvme_keyring 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 483507 ']' 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 483507 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 483507 ']' 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 483507 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483507 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483507' 00:28:41.569 killing process with pid 483507 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 483507 00:28:41.569 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 483507 00:28:41.828 00:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:41.828 00:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:41.828 00:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:41.828 00:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:41.828 00:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:41.828 00:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:41.828 00:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:41.828 00:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.828 00:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.828 00:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.828 00:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.087 00:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.993 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.993 00:28:43.993 real 0m16.785s 00:28:43.993 user 0m34.240s 00:28:43.993 sys 0m7.164s 00:28:43.993 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:43.993 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:43.993 ************************************ 00:28:43.993 END TEST nvmf_shutdown_tc1 00:28:43.993 ************************************ 00:28:43.993 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:43.993 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:43.993 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:43.993 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:44.253 ************************************ 00:28:44.253 START TEST nvmf_shutdown_tc2 00:28:44.253 ************************************ 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.253 00:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:44.253 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:44.253 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.253 00:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:44.253 Found net devices under 0000:af:00.0: cvl_0_0 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:44.253 Found net devices under 0000:af:00.1: cvl_0_1 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:44.253 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:44.254 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:44.512 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:44.512 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:44.512 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:44.512 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:44.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:44.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:28:44.512 00:28:44.512 --- 10.0.0.2 ping statistics --- 00:28:44.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.512 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:28:44.512 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:44.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:44.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:28:44.512 00:28:44.512 --- 10.0.0.1 ping statistics --- 00:28:44.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.512 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:28:44.512 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.512 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:44.512 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:44.512 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.512 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:44.512 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:44.512 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.512 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:44.512 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:44.513 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:44.513 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:44.513 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:44.513 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:44.513 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=485409 00:28:44.513 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 485409 00:28:44.513 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:44.513 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 485409 ']' 00:28:44.513 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.513 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.513 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.513 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.513 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:44.513 [2024-12-10 00:10:28.890804] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:28:44.513 [2024-12-10 00:10:28.890858] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.771 [2024-12-10 00:10:28.986480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:44.771 [2024-12-10 00:10:29.028455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.771 [2024-12-10 00:10:29.028496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.771 [2024-12-10 00:10:29.028506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:44.771 [2024-12-10 00:10:29.028515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:44.771 [2024-12-10 00:10:29.028522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:44.771 [2024-12-10 00:10:29.030351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.771 [2024-12-10 00:10:29.030462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.771 [2024-12-10 00:10:29.030570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.771 [2024-12-10 00:10:29.030571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:45.336 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.336 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:45.336 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:45.336 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:45.336 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.336 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.336 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:45.336 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.337 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.337 [2024-12-10 00:10:29.777422] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.337 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.337 00:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:45.337 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:45.337 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.337 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.337 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:45.337 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.337 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:45.337 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.337 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:45.337 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.337 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:45.596 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.596 
00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.596 Malloc1 00:28:45.596 [2024-12-10 00:10:29.903332] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.596 Malloc2 00:28:45.596 Malloc3 00:28:45.596 Malloc4 00:28:45.596 Malloc5 00:28:45.854 Malloc6 00:28:45.854 Malloc7 00:28:45.854 Malloc8 00:28:45.854 Malloc9 00:28:45.854 Malloc10 00:28:45.854 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.854 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:45.854 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:45.854 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=485690 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 485690 /var/tmp/bdevperf.sock 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 485690 ']' 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:46.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
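Annotation: in tc2 the same flow is driven over an RPC socket; bdevperf is started with -r /var/tmp/bdevperf.sock and the generated JSON on /dev/fd/63, and the harness waits for the application to come up before issuing RPCs. A rough sketch of that launch sequence follows, assuming SPDK-repo-relative paths and using a simple socket poll as a stand-in for the waitforlisten helper.

    # Sketch only: launch bdevperf against the target and wait for its RPC socket.
    SOCK=/var/tmp/bdevperf.sock
    ./build/examples/bdevperf -r "$SOCK" --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!                                   # 485690 in this run
    until [ -S "$SOCK" ]; do sleep 0.1; done     # assumption: stand-in for waitforlisten
    ./scripts/rpc.py -s "$SOCK" framework_wait_init

The -q 64 -o 65536 -w verify -t 10 flags match the trace above: queue depth 64, 64 KiB I/O size, verify workload, 10-second run.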
00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.113 { 00:28:46.113 "params": { 00:28:46.113 "name": "Nvme$subsystem", 00:28:46.113 "trtype": "$TEST_TRANSPORT", 00:28:46.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.113 "adrfam": "ipv4", 00:28:46.113 "trsvcid": "$NVMF_PORT", 00:28:46.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.113 "hdgst": ${hdgst:-false}, 00:28:46.113 "ddgst": ${ddgst:-false} 00:28:46.113 }, 00:28:46.113 "method": "bdev_nvme_attach_controller" 00:28:46.113 } 00:28:46.113 EOF 00:28:46.113 )") 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.113 { 00:28:46.113 "params": { 00:28:46.113 "name": "Nvme$subsystem", 00:28:46.113 "trtype": "$TEST_TRANSPORT", 00:28:46.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.113 "adrfam": "ipv4", 00:28:46.113 "trsvcid": "$NVMF_PORT", 00:28:46.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.113 "hdgst": ${hdgst:-false}, 00:28:46.113 "ddgst": ${ddgst:-false} 00:28:46.113 }, 00:28:46.113 "method": "bdev_nvme_attach_controller" 00:28:46.113 } 00:28:46.113 EOF 00:28:46.113 )") 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.113 { 00:28:46.113 "params": { 00:28:46.113 "name": "Nvme$subsystem", 00:28:46.113 "trtype": "$TEST_TRANSPORT", 00:28:46.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.113 "adrfam": "ipv4", 00:28:46.113 "trsvcid": "$NVMF_PORT", 00:28:46.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.113 "hdgst": ${hdgst:-false}, 00:28:46.113 "ddgst": ${ddgst:-false} 00:28:46.113 }, 00:28:46.113 "method": "bdev_nvme_attach_controller" 00:28:46.113 } 00:28:46.113 EOF 00:28:46.113 )") 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:28:46.113 { 00:28:46.113 "params": { 00:28:46.113 "name": "Nvme$subsystem", 00:28:46.113 "trtype": "$TEST_TRANSPORT", 00:28:46.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.113 "adrfam": "ipv4", 00:28:46.113 "trsvcid": "$NVMF_PORT", 00:28:46.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.113 "hdgst": ${hdgst:-false}, 00:28:46.113 "ddgst": ${ddgst:-false} 00:28:46.113 }, 00:28:46.113 "method": "bdev_nvme_attach_controller" 00:28:46.113 } 00:28:46.113 EOF 00:28:46.113 )") 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.113 { 00:28:46.113 "params": { 00:28:46.113 "name": "Nvme$subsystem", 00:28:46.113 "trtype": "$TEST_TRANSPORT", 00:28:46.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.113 "adrfam": "ipv4", 00:28:46.113 "trsvcid": "$NVMF_PORT", 00:28:46.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.113 "hdgst": ${hdgst:-false}, 00:28:46.113 "ddgst": ${ddgst:-false} 00:28:46.113 }, 00:28:46.113 "method": "bdev_nvme_attach_controller" 00:28:46.113 } 00:28:46.113 EOF 00:28:46.113 )") 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.113 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.113 { 00:28:46.113 "params": { 00:28:46.113 "name": "Nvme$subsystem", 00:28:46.113 "trtype": "$TEST_TRANSPORT", 00:28:46.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.113 "adrfam": "ipv4", 00:28:46.113 "trsvcid": "$NVMF_PORT", 00:28:46.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.113 "hdgst": ${hdgst:-false}, 00:28:46.113 "ddgst": ${ddgst:-false} 00:28:46.113 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 } 00:28:46.114 EOF 00:28:46.114 )") 00:28:46.114 [2024-12-10 00:10:30.388421] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:28:46.114 [2024-12-10 00:10:30.388474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485690 ] 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.114 { 00:28:46.114 "params": { 00:28:46.114 "name": "Nvme$subsystem", 00:28:46.114 "trtype": "$TEST_TRANSPORT", 00:28:46.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.114 "adrfam": "ipv4", 00:28:46.114 "trsvcid": "$NVMF_PORT", 00:28:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.114 "hdgst": ${hdgst:-false}, 00:28:46.114 "ddgst": ${ddgst:-false} 00:28:46.114 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 } 00:28:46.114 EOF 00:28:46.114 )") 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.114 { 00:28:46.114 "params": { 00:28:46.114 "name": "Nvme$subsystem", 00:28:46.114 "trtype": "$TEST_TRANSPORT", 00:28:46.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.114 "adrfam": "ipv4", 00:28:46.114 "trsvcid": "$NVMF_PORT", 00:28:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.114 "hdgst": ${hdgst:-false}, 00:28:46.114 "ddgst": ${ddgst:-false} 00:28:46.114 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 } 00:28:46.114 EOF 00:28:46.114 )") 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.114 { 00:28:46.114 "params": { 00:28:46.114 "name": "Nvme$subsystem", 00:28:46.114 "trtype": "$TEST_TRANSPORT", 00:28:46.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.114 "adrfam": "ipv4", 00:28:46.114 "trsvcid": "$NVMF_PORT", 00:28:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.114 "hdgst": ${hdgst:-false}, 00:28:46.114 "ddgst": ${ddgst:-false} 00:28:46.114 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 } 00:28:46.114 EOF 00:28:46.114 )") 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.114 { 00:28:46.114 "params": { 00:28:46.114 "name": 
"Nvme$subsystem", 00:28:46.114 "trtype": "$TEST_TRANSPORT", 00:28:46.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.114 "adrfam": "ipv4", 00:28:46.114 "trsvcid": "$NVMF_PORT", 00:28:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.114 "hdgst": ${hdgst:-false}, 00:28:46.114 "ddgst": ${ddgst:-false} 00:28:46.114 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 } 00:28:46.114 EOF 00:28:46.114 )") 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:46.114 00:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:46.114 "params": { 00:28:46.114 "name": "Nvme1", 00:28:46.114 "trtype": "tcp", 00:28:46.114 "traddr": "10.0.0.2", 00:28:46.114 "adrfam": "ipv4", 00:28:46.114 "trsvcid": "4420", 00:28:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:46.114 "hdgst": false, 00:28:46.114 "ddgst": false 00:28:46.114 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 },{ 00:28:46.114 "params": { 00:28:46.114 "name": "Nvme2", 00:28:46.114 "trtype": "tcp", 00:28:46.114 "traddr": "10.0.0.2", 00:28:46.114 "adrfam": "ipv4", 00:28:46.114 "trsvcid": "4420", 00:28:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:46.114 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:46.114 "hdgst": false, 00:28:46.114 "ddgst": false 00:28:46.114 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 },{ 00:28:46.114 "params": { 00:28:46.114 "name": "Nvme3", 00:28:46.114 "trtype": "tcp", 00:28:46.114 "traddr": "10.0.0.2", 00:28:46.114 "adrfam": "ipv4", 00:28:46.114 "trsvcid": "4420", 00:28:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:46.114 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:46.114 "hdgst": false, 00:28:46.114 "ddgst": false 00:28:46.114 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 },{ 00:28:46.114 "params": { 00:28:46.114 "name": "Nvme4", 00:28:46.114 "trtype": "tcp", 00:28:46.114 "traddr": "10.0.0.2", 00:28:46.114 "adrfam": "ipv4", 00:28:46.114 "trsvcid": "4420", 00:28:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:46.114 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:46.114 "hdgst": false, 00:28:46.114 "ddgst": false 00:28:46.114 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 },{ 00:28:46.114 "params": { 00:28:46.114 "name": "Nvme5", 00:28:46.114 "trtype": "tcp", 00:28:46.114 "traddr": "10.0.0.2", 00:28:46.114 "adrfam": "ipv4", 00:28:46.114 "trsvcid": "4420", 00:28:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:46.114 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:46.114 "hdgst": false, 00:28:46.114 "ddgst": false 00:28:46.114 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 },{ 00:28:46.114 "params": { 00:28:46.114 "name": "Nvme6", 00:28:46.114 "trtype": "tcp", 00:28:46.114 "traddr": "10.0.0.2", 00:28:46.114 "adrfam": "ipv4", 00:28:46.114 "trsvcid": "4420", 00:28:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:46.114 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:46.114 "hdgst": false, 00:28:46.114 "ddgst": false 00:28:46.114 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 },{ 
00:28:46.114 "params": { 00:28:46.114 "name": "Nvme7", 00:28:46.114 "trtype": "tcp", 00:28:46.114 "traddr": "10.0.0.2", 00:28:46.114 "adrfam": "ipv4", 00:28:46.114 "trsvcid": "4420", 00:28:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:46.114 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:46.114 "hdgst": false, 00:28:46.114 "ddgst": false 00:28:46.114 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 },{ 00:28:46.114 "params": { 00:28:46.114 "name": "Nvme8", 00:28:46.114 "trtype": "tcp", 00:28:46.114 "traddr": "10.0.0.2", 00:28:46.114 "adrfam": "ipv4", 00:28:46.114 "trsvcid": "4420", 00:28:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:46.114 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:46.114 "hdgst": false, 00:28:46.114 "ddgst": false 00:28:46.114 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 },{ 00:28:46.114 "params": { 00:28:46.114 "name": "Nvme9", 00:28:46.114 "trtype": "tcp", 00:28:46.114 "traddr": "10.0.0.2", 00:28:46.114 "adrfam": "ipv4", 00:28:46.114 "trsvcid": "4420", 00:28:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:46.114 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:46.114 "hdgst": false, 00:28:46.114 "ddgst": false 00:28:46.114 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 },{ 00:28:46.114 "params": { 00:28:46.114 "name": "Nvme10", 00:28:46.114 "trtype": "tcp", 00:28:46.114 "traddr": "10.0.0.2", 00:28:46.114 "adrfam": "ipv4", 00:28:46.114 "trsvcid": "4420", 00:28:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:46.114 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:46.114 "hdgst": false, 00:28:46.114 "ddgst": false 00:28:46.114 }, 00:28:46.114 "method": "bdev_nvme_attach_controller" 00:28:46.114 }' 00:28:46.114 [2024-12-10 00:10:30.484517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.114 [2024-12-10 00:10:30.523731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.488 Running I/O for 10 seconds... 
00:28:47.488 00:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.488 00:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:47.488 00:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:47.488 00:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.488 00:10:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.746 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.746 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:47.746 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:47.746 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:47.746 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:47.746 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:47.746 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:47.746 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:47.746 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:47.746 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:47.746 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.746 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.746 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.747 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:47.747 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:47.747 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.005 00:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 485690 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 485690 ']' 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 485690 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 485690 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 485690' 00:28:48.005 killing process with pid 485690 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 485690 00:28:48.005 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 485690 00:28:48.263 Received shutdown signal, test time was about 0.680036 seconds 00:28:48.263 00:28:48.263 Latency(us) 00:28:48.263 [2024-12-09T23:10:32.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.263 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.263 Verification LBA range: start 0x0 length 0x400 00:28:48.263 Nvme1n1 : 0.62 310.78 19.42 0.00 0.00 201652.91 23802.68 196293.43 00:28:48.263 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.263 Verification LBA range: start 0x0 length 0x400 00:28:48.263 Nvme2n1 : 0.64 299.13 18.70 0.00 0.00 205472.84 16357.79 207198.62 00:28:48.263 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.263 Verification LBA range: start 0x0 length 0x400 00:28:48.263 Nvme3n1 : 0.63 304.69 19.04 0.00 0.00 196650.87 18035.51 203004.31 00:28:48.263 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.263 Verification LBA range: start 0x0 length 0x400 00:28:48.263 Nvme4n1 : 0.64 302.05 18.88 0.00 0.00 192843.23 14994.64 203004.31 
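The polling above is target/shutdown.sh's waitforio helper: it queries bdevperf over its dedicated RPC socket up to ten times, a quarter second apart, until Nvme1n1 reports at least 100 completed reads (3 on the first poll, 131 on the second here), and only then does the test kill the bdevperf process. A condensed sketch, assuming rpc_cmd wraps the standard scripts/rpc.py client (its definition is not part of this trace):

# Condensed sketch of waitforio as traced above (target/shutdown.sh@51-70).
waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
waitforio /var/tmp/bdevperf.sock Nvme1n1 && kill "$perfpid"   # perfpid is bdevperf's pid (485690 above)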
00:28:48.263 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.263 Verification LBA range: start 0x0 length 0x400 00:28:48.263 Nvme5n1 : 0.68 282.60 17.66 0.00 0.00 190479.56 18140.36 209715.20 00:28:48.263 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.263 Verification LBA range: start 0x0 length 0x400 00:28:48.263 Nvme6n1 : 0.63 302.40 18.90 0.00 0.00 183174.21 29150.41 189582.54 00:28:48.263 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.263 Verification LBA range: start 0x0 length 0x400 00:28:48.263 Nvme7n1 : 0.63 306.88 19.18 0.00 0.00 174843.77 14050.92 198810.01 00:28:48.264 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.264 Verification LBA range: start 0x0 length 0x400 00:28:48.264 Nvme8n1 : 0.62 309.16 19.32 0.00 0.00 168099.02 14680.06 204682.04 00:28:48.264 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.264 Verification LBA range: start 0x0 length 0x400 00:28:48.264 Nvme9n1 : 0.61 210.11 13.13 0.00 0.00 239238.35 30198.99 213070.64 00:28:48.264 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.264 Verification LBA range: start 0x0 length 0x400 00:28:48.264 Nvme10n1 : 0.61 208.93 13.06 0.00 0.00 233589.56 17825.79 228170.14 00:28:48.264 [2024-12-09T23:10:32.737Z] =================================================================================================================== 00:28:48.264 [2024-12-09T23:10:32.737Z] Total : 2836.74 177.30 0.00 0.00 195903.75 14050.92 228170.14 00:28:48.264 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 485409 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.636 rmmod nvme_tcp 00:28:49.636 rmmod nvme_fabrics 00:28:49.636 rmmod nvme_keyring 00:28:49.636 00:10:33 
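The per-job header in the table (Core Mask 0x1, verify workload, depth 64, IO size 65536) corresponds to the bdevperf options these tests use (-q 64 -o 65536 -w verify -t 10, visible in the tc3 invocation further down); the run was cut short by the kill, hence a test time of about 0.68 s instead of the requested 10 s. The Total row is also internally consistent: the ten per-device IOPS values sum to roughly 2836.7, and at 64 KiB per IO that is roughly 177.3 MiB/s. As a quick check:

# Illustrative check of the Total row from the table above.
printf '%s\n' 310.78 299.13 304.69 302.05 282.60 302.40 306.88 309.16 210.11 208.93 |
    awk '{s += $1} END {printf "total IOPS %.2f, MiB/s %.2f\n", s, s * 65536 / 1048576}'
# -> total IOPS 2836.73, MiB/s 177.30 (the table's 2836.74 reflects unrounded per-device inputs)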
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 485409 ']' 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 485409 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 485409 ']' 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 485409 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 485409 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 485409' 00:28:49.636 killing process with pid 485409 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 485409 00:28:49.636 00:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 485409 00:28:49.896 00:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:49.896 00:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:49.896 00:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:49.896 00:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:49.896 00:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:49.896 00:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:49.896 00:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:49.896 00:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.896 00:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.896 00:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.896 00:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.896 00:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:52.433 00:28:52.433 real 0m7.866s 00:28:52.433 user 0m23.150s 00:28:52.433 sys 0m1.559s 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.433 ************************************ 00:28:52.433 END TEST nvmf_shutdown_tc2 00:28:52.433 ************************************ 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:52.433 ************************************ 00:28:52.433 START TEST nvmf_shutdown_tc3 00:28:52.433 ************************************ 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.433 00:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.433 00:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:52.433 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:52.433 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.433 00:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.433 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:52.434 Found net devices under 0000:af:00.0: cvl_0_0 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:52.434 Found net devices under 0000:af:00.1: cvl_0_1 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:52.434 00:10:36 
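Device discovery above finds two ports with vendor:device 0x8086:0x159b, one of the IDs the script files under its e810 list (nvmf/common.sh@325-326), and resolves each PCI function to its kernel net device by globbing /sys/bus/pci/devices/<bdf>/net/. The same lookup can be reproduced by hand; the BDF below is the one from the trace:

# Resolve an E810 PCI function to its net device the same way the trace does (via sysfs).
ls /sys/bus/pci/devices/0000:af:00.0/net/    # -> cvl_0_0 on this node
lspci -d 8086:159b                           # list every 0x8086:0x159b function present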
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:52.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:28:52.434 00:28:52.434 --- 10.0.0.2 ping statistics --- 00:28:52.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.434 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:52.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:28:52.434 00:28:52.434 --- 10.0.0.1 ping statistics --- 00:28:52.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.434 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=486888 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 486888 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 486888 ']' 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
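nvmf_tcp_init above wires the two ports back to back through a network namespace: the target-side port cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are verified with a single ping. Condensed from the trace, with the SPDK wrappers and the iptables comment tag dropped:

# Condensed from nvmf/common.sh@250-291 as traced above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator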
00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.434 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:52.434 [2024-12-10 00:10:36.862903] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:28:52.434 [2024-12-10 00:10:36.862955] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.698 [2024-12-10 00:10:36.955919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:52.698 [2024-12-10 00:10:36.997562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.698 [2024-12-10 00:10:36.997602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.698 [2024-12-10 00:10:36.997612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.698 [2024-12-10 00:10:36.997622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.698 [2024-12-10 00:10:36.997629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.698 [2024-12-10 00:10:36.999422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.698 [2024-12-10 00:10:36.999531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.698 [2024-12-10 00:10:36.999637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.698 [2024-12-10 00:10:36.999638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:53.264 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.264 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:53.264 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:53.264 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.265 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.529 [2024-12-10 00:10:37.746262] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:53.529 00:10:37 
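The target is launched with -m 0x1E, and 0x1E is binary 11110, i.e. cores 1-4, which matches the four reactors reported above; the bdevperf side runs with -c 0x1 (core 0 only, as in its EAL parameter lines), so in this setup the target and the I/O generator end up on disjoint cores. To expand the mask:

echo 'obase=2; ibase=16; 1E' | bc   # -> 11110, bits 1,2,3,4 set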
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.529 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.529 Malloc1 
00:28:53.529 [2024-12-10 00:10:37.886602] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.529 Malloc2 00:28:53.529 Malloc3 00:28:53.529 Malloc4 00:28:53.790 Malloc5 00:28:53.790 Malloc6 00:28:53.790 Malloc7 00:28:53.790 Malloc8 00:28:53.790 Malloc9 00:28:53.790 Malloc10 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=487165 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 487165 /var/tmp/bdevperf.sock 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 487165 ']' 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:54.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
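The create_subsystems phase above loops i=1..10, appending one RPC batch per subsystem to rpcs.txt (the heredoc at target/shutdown.sh@29) and then issuing the whole file through rpc_cmd; the Malloc1-Malloc10 bdevs and the listener on 10.0.0.2:4420 in the log are the visible result. The heredoc body itself is not expanded in this trace, so the following per-iteration batch is a plausible reconstruction using standard SPDK RPC methods, not the literal script contents:

# Hypothetical contents appended to rpcs.txt for one value of $i.
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420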
00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.048 { 00:28:54.048 "params": { 00:28:54.048 "name": "Nvme$subsystem", 00:28:54.048 "trtype": "$TEST_TRANSPORT", 00:28:54.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.048 "adrfam": "ipv4", 00:28:54.048 "trsvcid": "$NVMF_PORT", 00:28:54.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.048 "hdgst": ${hdgst:-false}, 00:28:54.048 "ddgst": ${ddgst:-false} 00:28:54.048 }, 00:28:54.048 "method": "bdev_nvme_attach_controller" 00:28:54.048 } 00:28:54.048 EOF 00:28:54.048 )") 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.048 { 00:28:54.048 "params": { 00:28:54.048 "name": "Nvme$subsystem", 00:28:54.048 "trtype": "$TEST_TRANSPORT", 00:28:54.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.048 "adrfam": "ipv4", 00:28:54.048 "trsvcid": "$NVMF_PORT", 00:28:54.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.048 "hdgst": ${hdgst:-false}, 00:28:54.048 "ddgst": ${ddgst:-false} 00:28:54.048 }, 00:28:54.048 "method": "bdev_nvme_attach_controller" 00:28:54.048 } 00:28:54.048 EOF 00:28:54.048 )") 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.048 { 00:28:54.048 "params": { 00:28:54.048 "name": "Nvme$subsystem", 00:28:54.048 "trtype": "$TEST_TRANSPORT", 00:28:54.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.048 "adrfam": "ipv4", 00:28:54.048 "trsvcid": "$NVMF_PORT", 00:28:54.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.048 "hdgst": ${hdgst:-false}, 00:28:54.048 "ddgst": ${ddgst:-false} 00:28:54.048 }, 00:28:54.048 "method": "bdev_nvme_attach_controller" 00:28:54.048 } 00:28:54.048 EOF 00:28:54.048 )") 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:28:54.048 { 00:28:54.048 "params": { 00:28:54.048 "name": "Nvme$subsystem", 00:28:54.048 "trtype": "$TEST_TRANSPORT", 00:28:54.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.048 "adrfam": "ipv4", 00:28:54.048 "trsvcid": "$NVMF_PORT", 00:28:54.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.048 "hdgst": ${hdgst:-false}, 00:28:54.048 "ddgst": ${ddgst:-false} 00:28:54.048 }, 00:28:54.048 "method": "bdev_nvme_attach_controller" 00:28:54.048 } 00:28:54.048 EOF 00:28:54.048 )") 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.048 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.048 { 00:28:54.048 "params": { 00:28:54.048 "name": "Nvme$subsystem", 00:28:54.048 "trtype": "$TEST_TRANSPORT", 00:28:54.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.048 "adrfam": "ipv4", 00:28:54.048 "trsvcid": "$NVMF_PORT", 00:28:54.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.049 "hdgst": ${hdgst:-false}, 00:28:54.049 "ddgst": ${ddgst:-false} 00:28:54.049 }, 00:28:54.049 "method": "bdev_nvme_attach_controller" 00:28:54.049 } 00:28:54.049 EOF 00:28:54.049 )") 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.049 { 00:28:54.049 "params": { 00:28:54.049 "name": "Nvme$subsystem", 00:28:54.049 "trtype": "$TEST_TRANSPORT", 00:28:54.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.049 "adrfam": "ipv4", 00:28:54.049 "trsvcid": "$NVMF_PORT", 00:28:54.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.049 "hdgst": ${hdgst:-false}, 00:28:54.049 "ddgst": ${ddgst:-false} 00:28:54.049 }, 00:28:54.049 "method": "bdev_nvme_attach_controller" 00:28:54.049 } 00:28:54.049 EOF 00:28:54.049 )") 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.049 [2024-12-10 00:10:38.374529] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:28:54.049 [2024-12-10 00:10:38.374579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487165 ] 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.049 { 00:28:54.049 "params": { 00:28:54.049 "name": "Nvme$subsystem", 00:28:54.049 "trtype": "$TEST_TRANSPORT", 00:28:54.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.049 "adrfam": "ipv4", 00:28:54.049 "trsvcid": "$NVMF_PORT", 00:28:54.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.049 "hdgst": ${hdgst:-false}, 00:28:54.049 "ddgst": ${ddgst:-false} 00:28:54.049 }, 00:28:54.049 "method": "bdev_nvme_attach_controller" 00:28:54.049 } 00:28:54.049 EOF 00:28:54.049 )") 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.049 { 00:28:54.049 "params": { 00:28:54.049 "name": "Nvme$subsystem", 00:28:54.049 "trtype": "$TEST_TRANSPORT", 00:28:54.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.049 "adrfam": "ipv4", 00:28:54.049 "trsvcid": "$NVMF_PORT", 00:28:54.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.049 "hdgst": ${hdgst:-false}, 00:28:54.049 "ddgst": ${ddgst:-false} 00:28:54.049 }, 00:28:54.049 "method": "bdev_nvme_attach_controller" 00:28:54.049 } 00:28:54.049 EOF 00:28:54.049 )") 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.049 { 00:28:54.049 "params": { 00:28:54.049 "name": "Nvme$subsystem", 00:28:54.049 "trtype": "$TEST_TRANSPORT", 00:28:54.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.049 "adrfam": "ipv4", 00:28:54.049 "trsvcid": "$NVMF_PORT", 00:28:54.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.049 "hdgst": ${hdgst:-false}, 00:28:54.049 "ddgst": ${ddgst:-false} 00:28:54.049 }, 00:28:54.049 "method": "bdev_nvme_attach_controller" 00:28:54.049 } 00:28:54.049 EOF 00:28:54.049 )") 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.049 { 00:28:54.049 "params": { 00:28:54.049 "name": "Nvme$subsystem", 00:28:54.049 "trtype": "$TEST_TRANSPORT", 00:28:54.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.049 
"adrfam": "ipv4", 00:28:54.049 "trsvcid": "$NVMF_PORT", 00:28:54.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.049 "hdgst": ${hdgst:-false}, 00:28:54.049 "ddgst": ${ddgst:-false} 00:28:54.049 }, 00:28:54.049 "method": "bdev_nvme_attach_controller" 00:28:54.049 } 00:28:54.049 EOF 00:28:54.049 )") 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:54.049 00:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:54.049 "params": { 00:28:54.049 "name": "Nvme1", 00:28:54.049 "trtype": "tcp", 00:28:54.049 "traddr": "10.0.0.2", 00:28:54.049 "adrfam": "ipv4", 00:28:54.049 "trsvcid": "4420", 00:28:54.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:54.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:54.049 "hdgst": false, 00:28:54.049 "ddgst": false 00:28:54.049 }, 00:28:54.049 "method": "bdev_nvme_attach_controller" 00:28:54.049 },{ 00:28:54.049 "params": { 00:28:54.049 "name": "Nvme2", 00:28:54.049 "trtype": "tcp", 00:28:54.049 "traddr": "10.0.0.2", 00:28:54.049 "adrfam": "ipv4", 00:28:54.049 "trsvcid": "4420", 00:28:54.049 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:54.049 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:54.049 "hdgst": false, 00:28:54.049 "ddgst": false 00:28:54.049 }, 00:28:54.049 "method": "bdev_nvme_attach_controller" 00:28:54.049 },{ 00:28:54.049 "params": { 00:28:54.049 "name": "Nvme3", 00:28:54.049 "trtype": "tcp", 00:28:54.049 "traddr": "10.0.0.2", 00:28:54.049 "adrfam": "ipv4", 00:28:54.049 "trsvcid": "4420", 00:28:54.049 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:54.049 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:54.049 "hdgst": false, 00:28:54.049 "ddgst": false 00:28:54.049 }, 00:28:54.049 "method": "bdev_nvme_attach_controller" 00:28:54.049 },{ 00:28:54.049 "params": { 00:28:54.049 "name": "Nvme4", 00:28:54.049 "trtype": "tcp", 00:28:54.049 "traddr": "10.0.0.2", 00:28:54.049 "adrfam": "ipv4", 00:28:54.049 "trsvcid": "4420", 00:28:54.049 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:54.049 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:54.049 "hdgst": false, 00:28:54.049 "ddgst": false 00:28:54.049 }, 00:28:54.049 "method": "bdev_nvme_attach_controller" 00:28:54.049 },{ 00:28:54.049 "params": { 00:28:54.049 "name": "Nvme5", 00:28:54.049 "trtype": "tcp", 00:28:54.049 "traddr": "10.0.0.2", 00:28:54.049 "adrfam": "ipv4", 00:28:54.049 "trsvcid": "4420", 00:28:54.049 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:54.049 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:54.049 "hdgst": false, 00:28:54.049 "ddgst": false 00:28:54.049 }, 00:28:54.049 "method": "bdev_nvme_attach_controller" 00:28:54.049 },{ 00:28:54.049 "params": { 00:28:54.049 "name": "Nvme6", 00:28:54.049 "trtype": "tcp", 00:28:54.049 "traddr": "10.0.0.2", 00:28:54.049 "adrfam": "ipv4", 00:28:54.049 "trsvcid": "4420", 00:28:54.049 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:54.049 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:54.049 "hdgst": false, 00:28:54.049 "ddgst": false 00:28:54.049 }, 00:28:54.049 "method": "bdev_nvme_attach_controller" 00:28:54.049 },{ 00:28:54.049 "params": { 00:28:54.049 "name": "Nvme7", 00:28:54.049 "trtype": "tcp", 00:28:54.049 "traddr": "10.0.0.2", 
00:28:54.049 "adrfam": "ipv4", 00:28:54.049 "trsvcid": "4420", 00:28:54.049 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:54.049 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:54.049 "hdgst": false, 00:28:54.049 "ddgst": false 00:28:54.049 }, 00:28:54.049 "method": "bdev_nvme_attach_controller" 00:28:54.049 },{ 00:28:54.049 "params": { 00:28:54.049 "name": "Nvme8", 00:28:54.049 "trtype": "tcp", 00:28:54.049 "traddr": "10.0.0.2", 00:28:54.049 "adrfam": "ipv4", 00:28:54.049 "trsvcid": "4420", 00:28:54.049 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:54.049 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:54.049 "hdgst": false, 00:28:54.049 "ddgst": false 00:28:54.049 }, 00:28:54.050 "method": "bdev_nvme_attach_controller" 00:28:54.050 },{ 00:28:54.050 "params": { 00:28:54.050 "name": "Nvme9", 00:28:54.050 "trtype": "tcp", 00:28:54.050 "traddr": "10.0.0.2", 00:28:54.050 "adrfam": "ipv4", 00:28:54.050 "trsvcid": "4420", 00:28:54.050 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:54.050 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:54.050 "hdgst": false, 00:28:54.050 "ddgst": false 00:28:54.050 }, 00:28:54.050 "method": "bdev_nvme_attach_controller" 00:28:54.050 },{ 00:28:54.050 "params": { 00:28:54.050 "name": "Nvme10", 00:28:54.050 "trtype": "tcp", 00:28:54.050 "traddr": "10.0.0.2", 00:28:54.050 "adrfam": "ipv4", 00:28:54.050 "trsvcid": "4420", 00:28:54.050 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:54.050 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:54.050 "hdgst": false, 00:28:54.050 "ddgst": false 00:28:54.050 }, 00:28:54.050 "method": "bdev_nvme_attach_controller" 00:28:54.050 }' 00:28:54.050 [2024-12-10 00:10:38.467517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.050 [2024-12-10 00:10:38.506420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.422 Running I/O for 10 seconds... 
00:28:55.422 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.422 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:55.422 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:55.422 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.422 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.680 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.680 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:55.680 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:55.680 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:55.680 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:55.680 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:55.680 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:55.680 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:55.680 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:55.680 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:55.680 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.680 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.680 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:55.680 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.680 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:55.680 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:55.680 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:55.938 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:55.938 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:55.938 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:55.938 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:55.938 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.938 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.938 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.938 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=83 00:28:55.938 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 83 -ge 100 ']' 00:28:55.938 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=199 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 199 -ge 100 ']' 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 486888 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 486888 ']' 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 486888 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.196 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 486888 00:28:56.470 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:56.470 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:56.470 00:10:40 
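The waitforio section above polls bdevperf over /var/tmp/bdevperf.sock until Nvme1n1 reports at least 100 completed reads (the count climbs 3, then 83, then 199 here), sleeping 0.25 s between attempts and allowing at most 10 attempts before giving up. A rough sketch of that polling loop follows; it is illustrative only, not the verbatim shutdown.sh source, and assumes the rpc_cmd wrapper already used in the trace is available.

# Sketch of the waitforio pattern traced above (assumed helper shape).
waitforio() {
  local sock=$1 bdev=$2
  local ret=1 i read_io_count

  for ((i = 10; i != 0; i--)); do
    # num_read_ops of the first (only requested) bdev in the iostat dump.
    read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return "$ret"
}

# e.g. waitforio /var/tmp/bdevperf.sock Nvme1n1

Here it succeeds on the third pass (199 >= 100), so shutdown.sh returns 0 and proceeds to killprocess for pid 486888, whose trace continues below.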
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 486888' 00:28:56.470 killing process with pid 486888 00:28:56.470 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 486888 00:28:56.470 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 486888 00:28:56.470 [2024-12-10 00:10:40.712967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 
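The killprocess steps traced around this point (common/autotest_common.sh) check that the pid argument is non-empty and still alive, confirm via ps that the command is not a bare sudo wrapper (here it is reactor_1), then kill the process and wait for it to exit. A rough sketch of that pattern follows, with the caveat that the real helper handles more cases (sudo children, non-Linux ps, and so on).

# Illustrative sketch of the killprocess pattern traced above; not the
# verbatim autotest_common.sh helper.
killprocess() {
  local pid=$1 process_name

  [ -n "$pid" ] || return 1          # no pid given
  kill -0 "$pid" || return 0         # process already gone

  if [ "$(uname)" = Linux ]; then
    process_name=$(ps --no-headers -o comm= "$pid")
  fi

  if [ "$process_name" != sudo ]; then
    echo "killing process with pid $pid"
    kill "$pid"
    # wait only reaps children of this shell; ignore failures otherwise.
    wait "$pid" || true
  fi
}

The repeated tcp.c:1790 nvmf_tcp_qpair_set_recv_state ERRLOG lines surrounding this point appear to be emitted while the killed application shuts its TCP qpairs down: the message fires whenever a qpair is asked to enter the receive state it is already in, and that happens many times over as the connections serving cnode1 through cnode10 are torn down.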
[2024-12-10 00:10:40.713190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.470 [2024-12-10 00:10:40.713310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the 
state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.713569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9d2020 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.714620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xade7a0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.714665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xade7a0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.714676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xade7a0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.714685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xade7a0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.714694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xade7a0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.714705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xade7a0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.714714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xade7a0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715626] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 
00:28:56.471 [2024-12-10 00:10:40.715821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.471 [2024-12-10 00:10:40.715935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.715943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.715954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.715962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.715973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.715981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.715989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.715997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.716006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is 
same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.716014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.716022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.716031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.716040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.716048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.716058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d24f0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717531] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 
00:28:56.472 [2024-12-10 00:10:40.717719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.717744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d29c0 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.719239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.719257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.719266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.719275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.719283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.719292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.719301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.472 [2024-12-10 00:10:40.719312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is 
same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719790] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.719808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3230 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 
00:28:56.473 [2024-12-10 00:10:40.720795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.473 [2024-12-10 00:10:40.720871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.720881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.720889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.720898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.720907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.720916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.720924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.720932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.720941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.720950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.720959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.720967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.720975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.720983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is 
same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.720993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.721179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x9d3700 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ac50 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ac50 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ac50 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ac50 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ac50 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set 00:28:56.474 [2024-12-10 00:10:40.722878] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.722887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.722895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.722904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.722912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.722922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.722912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.722931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.722940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.722944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.722949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.722958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.722958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.722969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.722970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.722979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.722981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.722990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.722991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.723012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.723022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188450 is same with the state(6) to be set
[2024-12-10 00:10:40.723031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.723067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.723076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.723086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.723096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.723119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.723128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.723137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.723147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fd110 is same with the state(6) to be set
[2024-12-10 00:10:40.723157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.723184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.723193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.723202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.723211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.723221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.723231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.723252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.723262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bbde0 is same with the state(6) to be set
[2024-12-10 00:10:40.723271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.723298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.723307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.723316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86b120 is same with the state(6) to be set
[2024-12-10 00:10:40.723327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.723337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.723347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.723357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.723365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:10:40.723374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d6ef0 is same with the state(6) to be set
[2024-12-10 00:10:40.723401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 00:10:40.723412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
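Note: the nvme_qpair.c messages alternating with the tcp.c errors above appear to be the host-side driver flushing its admin queues during the same teardown: each outstanding ASYNC EVENT REQUEST is printed together with the completion it receives, and the "(00/08)" in that completion is status code type 0x0 (generic) with status code 0x08, which the NVMe specification defines as Command Aborted due to SQ Deletion. A small, self-contained decoder for just the status pair seen here (not SPDK's own print helper) could look like:

#include <stdint.h>
#include <stdio.h>

/* Decode the "(sct/sc)" pair printed with each completion above. Only the
 * combination that actually appears in this log is spelled out. */
static const char *decode_status(uint8_t sct, uint8_t sc)
{
	if (sct == 0x0 && sc == 0x08)
		return "ABORTED - SQ DELETION";   /* generic status set, code 0x08 */
	return "unrecognized status";
}

int main(void)
{
	printf("(00/08) -> %s\n", decode_status(0x0, 0x08));
	return 0;
}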
00:28:56.475 [2024-12-10 00:10:40.723422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.475 [2024-12-10 00:10:40.723431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.475 [2024-12-10 00:10:40.723440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.475 [2024-12-10 00:10:40.723450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.475 [2024-12-10 00:10:40.723459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.475 [2024-12-10 00:10:40.723468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.475 [2024-12-10 00:10:40.723477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d1590 is same with the state(6) to be set 00:28:56.475 [2024-12-10 00:10:40.723504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.475 [2024-12-10 00:10:40.723514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.475 [2024-12-10 00:10:40.723524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.475 [2024-12-10 00:10:40.723533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.475 [2024-12-10 00:10:40.723543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b2dc0 is same with the state(6) to be set 00:28:56.476 [2024-12-10 00:10:40.723606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188250 is same with the state(6) to be set 00:28:56.476 [2024-12-10 00:10:40.723707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2193f80 is same with the state(6) to be set 00:28:56.476 [2024-12-10 00:10:40.723808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
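Note: the teardown continues below on the I/O queues: each in-flight WRITE and READ is printed with its LBA and length and completed with the same ABORTED - SQ DELETION status, and completion polling finally reports "CQ transport error -6 (No such device or address)" for qpair id 1 of nqn.2016-06.io.spdk:cnode4. The -6 is most likely the negated Linux errno ENXIO surfaced through spdk_nvme_qpair_process_completions() once the TCP connection behind the qpair is gone; a trivial standalone check of that reading:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	int rc = -6; /* the value printed as "CQ transport error -6" in the log below */

	/* -6 is the negated Linux errno ENXIO, whose text matches the log message. */
	printf("rc %d is -ENXIO: %s\n", rc, rc == -ENXIO ? "yes" : "no");
	printf("strerror(%d) = \"%s\"\n", -rc, strerror(-rc));
	return 0;
}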
00:28:56.476 [2024-12-10 00:10:40.723880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2194410 is same with the state(6) to be set 00:28:56.476 [2024-12-10 00:10:40.723920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.476 [2024-12-10 00:10:40.723989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.723998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25da660 is same with the state(6) to be set 00:28:56.476 [2024-12-10 00:10:40.724364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.476 [2024-12-10 00:10:40.724779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.476 [2024-12-10 00:10:40.724792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.724801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.724811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.724820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.724845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.724855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.724866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.724876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.724886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.724895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.724906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.724915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.724925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.724935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.724946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.724955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.724965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.724974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.724985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.724994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.477 [2024-12-10 00:10:40.725564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.477 [2024-12-10 00:10:40.725573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.725583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.725593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.725603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.725613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.725623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.725633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.725643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.725652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.725663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.725672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.725705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:56.478 [2024-12-10 00:10:40.736185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.478 [2024-12-10 00:10:40.736899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.478 [2024-12-10 00:10:40.736911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.736920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.736931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.736940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.736951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.736960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.736971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.736980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.736990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.736999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:56.479 [2024-12-10 00:10:40.737607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.479 [2024-12-10 00:10:40.737698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.479 [2024-12-10 00:10:40.737707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.737720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.737729] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.737740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.737749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.737760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.737769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.737780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.737789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.737800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.737810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.737820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.737834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.737845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.737855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.737866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.737875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.737886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.737896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.737906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.737915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.737926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.737936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.737949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.737960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.737971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.737980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.737991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.480 [2024-12-10 00:10:40.738489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.480 [2024-12-10 00:10:40.738499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.738924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.738934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.739049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188450 (9): Bad file descriptor 00:28:56.481 [2024-12-10 00:10:40.739071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fd110 (9): Bad file descriptor 00:28:56.481 [2024-12-10 00:10:40.739090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25bbde0 (9): Bad file descriptor 00:28:56.481 [2024-12-10 00:10:40.739110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x25d6ef0 (9): Bad file descriptor 00:28:56.481 [2024-12-10 00:10:40.739127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d1590 (9): Bad file descriptor 00:28:56.481 [2024-12-10 00:10:40.739142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25b2dc0 (9): Bad file descriptor 00:28:56.481 [2024-12-10 00:10:40.739157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188250 (9): Bad file descriptor 00:28:56.481 [2024-12-10 00:10:40.739175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2193f80 (9): Bad file descriptor 00:28:56.481 [2024-12-10 00:10:40.739193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2194410 (9): Bad file descriptor 00:28:56.481 [2024-12-10 00:10:40.739210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25da660 (9): Bad file descriptor 00:28:56.481 [2024-12-10 00:10:40.740357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.740396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.740417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.740439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.740460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.740480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.740501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.740522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.740541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.740562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.740586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.740608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.740629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.740649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.481 [2024-12-10 00:10:40.740670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.481 [2024-12-10 00:10:40.740680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.740984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.740993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.741362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.741373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.746534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.746549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.746560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.746571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.746580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.746591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.746601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.746612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.746622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.746633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.746642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.746655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.482 [2024-12-10 00:10:40.746665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.482 [2024-12-10 00:10:40.746677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.746686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.746697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.746707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.746717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.746727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.746739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.746750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.746762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.746772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.746786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.746796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.746807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.746817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.746833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.746842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.746854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.746863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.749942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:56.483 [2024-12-10 00:10:40.750026] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:28:56.483 [2024-12-10 00:10:40.750051] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:28:56.483 [2024-12-10 00:10:40.750083] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
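The burst of ABORTED - SQ DELETION (00/08) completions above accompanies the controller resets and the "Unable to perform failover, already in progress" notices logged for cnode4/cnode7/cnode9/cnode10: when a submission queue is torn down during a reset, every outstanding READ/WRITE still queued on it is completed with generic status code type 0x0, status code 0x08 ("Command Aborted due to SQ Deletion") instead of being silently dropped, which is exactly the (00/08) pair printed in each completion line. The following is a minimal decoding sketch only, assuming hypothetical struct and macro names rather than SPDK's own definitions.

    /* Illustrative sketch (not SPDK's API): decoding the "(00/08)" pair printed
     * above as status-code-type / status-code. Names here are hypothetical. */
    #include <stdio.h>
    #include <stdint.h>

    #define SCT_GENERIC            0x0   /* generic command status */
    #define SC_ABORTED_SQ_DELETION 0x08  /* command aborted due to SQ deletion */

    struct cpl_status {
        uint8_t sct;  /* status code type */
        uint8_t sc;   /* status code */
    };

    static const char *describe(struct cpl_status s)
    {
        if (s.sct == SCT_GENERIC && s.sc == SC_ABORTED_SQ_DELETION)
            return "ABORTED - SQ DELETION";
        return "other";
    }

    int main(void)
    {
        struct cpl_status s = { .sct = 0x0, .sc = 0x08 };  /* the (00/08) seen in the log */
        printf("%s\n", describe(s));
        return 0;
    }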
00:28:56.483 [2024-12-10 00:10:40.751043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:56.483 [2024-12-10 00:10:40.751071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:56.483 [2024-12-10 00:10:40.751238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-12-10 00:10:40.751257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25b2dc0 with addr=10.0.0.2, port=4420 00:28:56.483 [2024-12-10 00:10:40.751269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b2dc0 is same with the state(6) to be set 00:28:56.483 [2024-12-10 00:10:40.751318] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:56.483 [2024-12-10 00:10:40.751369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751546] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.483 [2024-12-10 00:10:40.751802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.483 [2024-12-10 00:10:40.751813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.751829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.751841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.751851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.751862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.751872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.751882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.751893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.751905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.751915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.751926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.751936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.751947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.751957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.751968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.751978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.751989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.751999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.484 [2024-12-10 00:10:40.752610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.484 [2024-12-10 00:10:40.752621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.752631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.752642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.752651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.752663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.752672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.752684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.752694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.752705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.752715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.752725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.752735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.753732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.753749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.753763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.753772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.753784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.753797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.753808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.753819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.753836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.753846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.753870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.753880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.753892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.753901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.753913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.753923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.753934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.753944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.753956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.753966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.753976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.753986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.753997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.485 [2024-12-10 00:10:40.754495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.485 [2024-12-10 00:10:40.754504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.754987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.754998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.755008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.755019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.755029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.755040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.755050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.755061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.755071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.755082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.755092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.755103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.755113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.755125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.755134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.756138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.756155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.756176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.756189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.756201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.756210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.756222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.756232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.756244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.756255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.756266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.756276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.756287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.756297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.756309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.756319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.756330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.756340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.756352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.486 [2024-12-10 00:10:40.756373] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.486 [2024-12-10 00:10:40.756384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.756981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.756994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.757004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.757015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.757024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.757035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.757045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.757056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.757065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.757076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.757085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.757095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.757105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.757116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.757126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.757136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.757145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.487 [2024-12-10 00:10:40.757156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.487 [2024-12-10 00:10:40.757166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.757176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.757197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
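(The NOTICE pairs in this stretch are emitted by the driver helpers named in the messages themselves, nvme_io_qpair_print_command and spdk_nvme_print_completion: each outstanding READ on qid 1 is printed together with its completion status, and "ABORTED - SQ DELETION (00/08)" is NVMe status code type 0x0 / status code 0x08, i.e. the command was aborted because its submission queue was deleted while the test resets controllers. A minimal sketch for tallying these entries offline is below; the file name nvmf-multipath.log is an assumption, not something this pipeline writes out.)
# hypothetical post-processing, assuming the console output above was saved as nvmf-multipath.log
grep -o 'ABORTED - SQ DELETION' nvmf-multipath.log | wc -l                                   # total aborted completions
grep -o 'READ sqid:[0-9]* cid:[0-9]*' nvmf-multipath.log | sort | uniq -c | sort -rn | head   # busiest command IDs
grep -o 'lba:[0-9]*' nvmf-multipath.log | sort -t: -k2 -n | uniq -c | head                    # LBAs carried by the aborted I/O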
00:28:56.488 [2024-12-10 00:10:40.757217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.757237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.757260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.757280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.757303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.757323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.757343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.757364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.757384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.757404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 
00:10:40.757424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.757444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.757465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.757485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.757494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.488 [2024-12-10 00:10:40.758847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.488 [2024-12-10 00:10:40.758859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.758869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.758880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.758889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.758901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.758910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.758922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.758932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.758943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.758952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.758963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.758973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.758984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.758993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.489 [2024-12-10 00:10:40.759679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.489 [2024-12-10 00:10:40.759689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.759698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.759710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.759719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.759730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.759739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.759750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.759759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.759769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.759781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.759792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.759802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761278] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.490 [2024-12-10 00:10:40.761662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.490 [2024-12-10 00:10:40.761672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.761985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.761995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:56.491 [2024-12-10 00:10:40.762118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 
00:10:40.762330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.762341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.491 [2024-12-10 00:10:40.762350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.491 [2024-12-10 00:10:40.763300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:56.491 [2024-12-10 00:10:40.763323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:56.491 [2024-12-10 00:10:40.763337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:56.491 [2024-12-10 00:10:40.763350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:56.491 [2024-12-10 00:10:40.763650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-12-10 00:10:40.763668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25da660 with addr=10.0.0.2, port=4420 00:28:56.491 [2024-12-10 00:10:40.763680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25da660 is same with the state(6) to be set 00:28:56.491 [2024-12-10 00:10:40.763926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-12-10 00:10:40.763942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25d1590 with addr=10.0.0.2, port=4420 00:28:56.491 [2024-12-10 00:10:40.763952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d1590 is same with the state(6) to be set 00:28:56.491 [2024-12-10 00:10:40.763966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25b2dc0 (9): Bad file descriptor 00:28:56.491 [2024-12-10 00:10:40.763999] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:28:56.491 [2024-12-10 00:10:40.764014] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:28:56.492 [2024-12-10 00:10:40.764032] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:28:56.492 [2024-12-10 00:10:40.764046] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
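(After the aborted completions, the log shows the recovery path: the bdev_nvme layer disconnects and resets the affected controllers (cnode2, cnode3, cnode5, cnode7), the TCP reconnects to 10.0.0.2:4420 are refused with errno 111, which is ECONNREFUSED on Linux, presumably because the target side is still down, and the overlapping failover attempts for cnode1/4/6/8 are reported as already in progress. The lines below are a hypothetical way to count these events from the same assumed nvmf-multipath.log capture; they are not part of the test scripts.)
# hypothetical summary of the reconnect/failover phase, same assumed log file
grep -o 'resetting controller' nvmf-multipath.log | wc -l                              # controller reset attempts
grep -o 'connect() failed, errno = [0-9]*' nvmf-multipath.log | sort | uniq -c         # socket failures by errno (111 = ECONNREFUSED)
grep -o 'Unable to perform failover, already in progress' nvmf-multipath.log | wc -l   # overlapping failover attempts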
00:28:56.492 [2024-12-10 00:10:40.764059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d1590 (9): Bad file descriptor 00:28:56.492 [2024-12-10 00:10:40.764075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25da660 (9): Bad file descriptor 00:28:56.492 [2024-12-10 00:10:40.764121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.492 [2024-12-10 00:10:40.764888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.492 [2024-12-10 00:10:40.764899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.764910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.764925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.764935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.764946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.764956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.764967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.764976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.764989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.764999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.493 [2024-12-10 00:10:40.765480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.493 [2024-12-10 00:10:40.765491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23982d0 is same with the state(6) to be set 00:28:56.493 [2024-12-10 00:10:40.766498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:56.493 [2024-12-10 00:10:40.766524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:56.493 task offset: 33408 on job bdev=Nvme4n1 fails 00:28:56.493 00:28:56.493 Latency(us) 00:28:56.493 [2024-12-09T23:10:40.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.493 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.493 Job: Nvme1n1 ended in about 0.93 seconds with error 00:28:56.493 Verification LBA range: start 0x0 length 0x400 00:28:56.493 Nvme1n1 : 0.93 210.65 13.17 68.78 0.00 226872.32 19922.94 204682.04 00:28:56.493 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.493 Job: Nvme2n1 ended in about 0.92 seconds with error 00:28:56.493 Verification LBA range: start 0x0 length 0x400 00:28:56.493 Nvme2n1 : 0.92 209.21 13.08 69.74 0.00 223476.33 16777.22 211392.92 00:28:56.493 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.493 Job: Nvme3n1 ended in about 0.92 seconds with error 00:28:56.493 Verification LBA range: start 0x0 length 0x400 00:28:56.493 Nvme3n1 : 0.92 214.10 13.38 69.56 0.00 216066.62 12006.20 206359.76 00:28:56.493 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.493 Job: Nvme4n1 ended in about 0.90 seconds with error 00:28:56.493 Verification LBA range: start 0x0 length 0x400 00:28:56.493 Nvme4n1 : 0.90 283.07 17.69 70.77 0.00 170040.61 12897.48 209715.20 
00:28:56.493 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.493 Job: Nvme5n1 ended in about 0.92 seconds with error 00:28:56.493 Verification LBA range: start 0x0 length 0x400 00:28:56.493 Nvme5n1 : 0.92 208.14 13.01 69.38 0.00 213348.76 17930.65 206359.76 00:28:56.493 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.493 Job: Nvme6n1 ended in about 0.92 seconds with error 00:28:56.493 Verification LBA range: start 0x0 length 0x400 00:28:56.493 Nvme6n1 : 0.92 207.62 12.98 69.21 0.00 210147.33 16882.07 223136.97 00:28:56.493 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.493 Job: Nvme7n1 ended in about 0.91 seconds with error 00:28:56.493 Verification LBA range: start 0x0 length 0x400 00:28:56.493 Nvme7n1 : 0.91 279.00 17.44 70.02 0.00 163394.68 11901.34 207198.62 00:28:56.493 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.493 Job: Nvme8n1 ended in about 0.93 seconds with error 00:28:56.493 Verification LBA range: start 0x0 length 0x400 00:28:56.494 Nvme8n1 : 0.93 207.05 12.94 69.02 0.00 203235.94 14260.63 207198.62 00:28:56.494 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.494 Job: Nvme9n1 ended in about 0.91 seconds with error 00:28:56.494 Verification LBA range: start 0x0 length 0x400 00:28:56.494 Nvme9n1 : 0.91 210.53 13.16 70.18 0.00 195667.56 31876.71 213070.64 00:28:56.494 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.494 Job: Nvme10n1 ended in about 0.91 seconds with error 00:28:56.494 Verification LBA range: start 0x0 length 0x400 00:28:56.494 Nvme10n1 : 0.91 210.34 13.15 70.11 0.00 192184.52 18559.80 229847.86 00:28:56.494 [2024-12-09T23:10:40.967Z] =================================================================================================================== 00:28:56.494 [2024-12-09T23:10:40.967Z] Total : 2239.73 139.98 696.77 0.00 199873.72 11901.34 229847.86 00:28:56.494 [2024-12-10 00:10:40.793820] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:56.494 [2024-12-10 00:10:40.793874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:56.494 [2024-12-10 00:10:40.794122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.494 [2024-12-10 00:10:40.794143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25bbde0 with addr=10.0.0.2, port=4420 00:28:56.494 [2024-12-10 00:10:40.794157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bbde0 is same with the state(6) to be set 00:28:56.494 [2024-12-10 00:10:40.794312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.494 [2024-12-10 00:10:40.794326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188450 with addr=10.0.0.2, port=4420 00:28:56.494 [2024-12-10 00:10:40.794336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188450 is same with the state(6) to be set 00:28:56.494 [2024-12-10 00:10:40.794461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.494 [2024-12-10 00:10:40.794474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2193f80 with addr=10.0.0.2, port=4420 00:28:56.494 [2024-12-10 00:10:40.794483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x2193f80 is same with the state(6) to be set 00:28:56.494 [2024-12-10 00:10:40.794682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.494 [2024-12-10 00:10:40.794696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188250 with addr=10.0.0.2, port=4420 00:28:56.494 [2024-12-10 00:10:40.794706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188250 is same with the state(6) to be set 00:28:56.494 [2024-12-10 00:10:40.794719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:56.494 [2024-12-10 00:10:40.794729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:56.494 [2024-12-10 00:10:40.794741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:56.494 [2024-12-10 00:10:40.794752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:56.494 [2024-12-10 00:10:40.796174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.494 [2024-12-10 00:10:40.796197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fd110 with addr=10.0.0.2, port=4420 00:28:56.494 [2024-12-10 00:10:40.796208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fd110 is same with the state(6) to be set 00:28:56.494 [2024-12-10 00:10:40.796369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.494 [2024-12-10 00:10:40.796381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25d6ef0 with addr=10.0.0.2, port=4420 00:28:56.494 [2024-12-10 00:10:40.796390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d6ef0 is same with the state(6) to be set 00:28:56.494 [2024-12-10 00:10:40.796588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.494 [2024-12-10 00:10:40.796601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2194410 with addr=10.0.0.2, port=4420 00:28:56.494 [2024-12-10 00:10:40.796610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2194410 is same with the state(6) to be set 00:28:56.494 [2024-12-10 00:10:40.796625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25bbde0 (9): Bad file descriptor 00:28:56.494 [2024-12-10 00:10:40.796640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188450 (9): Bad file descriptor 00:28:56.494 [2024-12-10 00:10:40.796652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2193f80 (9): Bad file descriptor 00:28:56.494 [2024-12-10 00:10:40.796663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188250 (9): Bad file descriptor 00:28:56.494 [2024-12-10 00:10:40.796674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:56.494 [2024-12-10 00:10:40.796683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:56.494 [2024-12-10 00:10:40.796693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:56.494 [2024-12-10 00:10:40.796707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:56.494 [2024-12-10 00:10:40.796717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:56.494 [2024-12-10 00:10:40.796725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:56.494 [2024-12-10 00:10:40.796735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:56.494 [2024-12-10 00:10:40.796743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:56.494 [2024-12-10 00:10:40.796795] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:28:56.494 [2024-12-10 00:10:40.796810] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:28:56.494 [2024-12-10 00:10:40.796855] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:28:56.494 [2024-12-10 00:10:40.796870] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:28:56.494 [2024-12-10 00:10:40.797164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fd110 (9): Bad file descriptor 00:28:56.494 [2024-12-10 00:10:40.797179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d6ef0 (9): Bad file descriptor 00:28:56.494 [2024-12-10 00:10:40.797190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2194410 (9): Bad file descriptor 00:28:56.494 [2024-12-10 00:10:40.797201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:56.494 [2024-12-10 00:10:40.797210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:56.494 [2024-12-10 00:10:40.797219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:56.494 [2024-12-10 00:10:40.797228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:56.494 [2024-12-10 00:10:40.797237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:56.494 [2024-12-10 00:10:40.797246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:56.494 [2024-12-10 00:10:40.797255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:56.494 [2024-12-10 00:10:40.797264] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
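(Annotation: with ten subsystems failing over at once, it helps to tally the reset failures per cnode rather than read them line by line. Assuming the console output has been captured to a file, here called nvmf_shutdown_tc3.log as a hypothetical name, a quick summary is:)

    # Count "Resetting controller failed" messages per subsystem NQN.
    grep -o '\[nqn\.2016-06\.io\.spdk:cnode[0-9]*, 1\] Resetting controller failed' \
        nvmf_shutdown_tc3.log | sort | uniq -c | sort -rn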
00:28:56.494 [2024-12-10 00:10:40.797273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:56.494 [2024-12-10 00:10:40.797281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:56.494 [2024-12-10 00:10:40.797290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:56.494 [2024-12-10 00:10:40.797298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:56.494 [2024-12-10 00:10:40.797307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:56.494 [2024-12-10 00:10:40.797316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:56.494 [2024-12-10 00:10:40.797325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:56.494 [2024-12-10 00:10:40.797333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:56.494 [2024-12-10 00:10:40.797409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:56.494 [2024-12-10 00:10:40.797422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:56.494 [2024-12-10 00:10:40.797432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:56.494 [2024-12-10 00:10:40.797461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:56.494 [2024-12-10 00:10:40.797471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:56.494 [2024-12-10 00:10:40.797480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:56.494 [2024-12-10 00:10:40.797488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:56.494 [2024-12-10 00:10:40.797498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:56.494 [2024-12-10 00:10:40.797507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:56.494 [2024-12-10 00:10:40.797516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:56.494 [2024-12-10 00:10:40.797523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:56.494 [2024-12-10 00:10:40.797533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:56.494 [2024-12-10 00:10:40.797542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:56.494 [2024-12-10 00:10:40.797550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:56.494 [2024-12-10 00:10:40.797559] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:28:56.494 [2024-12-10 00:10:40.797840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.494 [2024-12-10 00:10:40.797857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25b2dc0 with addr=10.0.0.2, port=4420 00:28:56.494 [2024-12-10 00:10:40.797867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b2dc0 is same with the state(6) to be set 00:28:56.494 [2024-12-10 00:10:40.798084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.494 [2024-12-10 00:10:40.798097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25d1590 with addr=10.0.0.2, port=4420 00:28:56.494 [2024-12-10 00:10:40.798107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d1590 is same with the state(6) to be set 00:28:56.494 [2024-12-10 00:10:40.798239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-12-10 00:10:40.798252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25da660 with addr=10.0.0.2, port=4420 00:28:56.495 [2024-12-10 00:10:40.798261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25da660 is same with the state(6) to be set 00:28:56.495 [2024-12-10 00:10:40.798291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25b2dc0 (9): Bad file descriptor 00:28:56.495 [2024-12-10 00:10:40.798305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d1590 (9): Bad file descriptor 00:28:56.495 [2024-12-10 00:10:40.798317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25da660 (9): Bad file descriptor 00:28:56.495 [2024-12-10 00:10:40.798346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:56.495 [2024-12-10 00:10:40.798356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:56.495 [2024-12-10 00:10:40.798369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:56.495 [2024-12-10 00:10:40.798378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:56.495 [2024-12-10 00:10:40.798388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:56.495 [2024-12-10 00:10:40.798397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:56.495 [2024-12-10 00:10:40.798405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:56.495 [2024-12-10 00:10:40.798413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:56.495 [2024-12-10 00:10:40.798423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:56.495 [2024-12-10 00:10:40.798433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:56.495 [2024-12-10 00:10:40.798442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
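(Annotation: the controller states behind the "Ctrlr is in error state" / "Resetting controller failed" cascade can also be dumped out-of-band with SPDK's RPC client while the application that owns the NVMe bdevs is still running. This is a sketch under assumptions: the scripts/rpc.py path follows this job's workspace layout, and -s points at the default RPC socket, which may differ for the bdevperf instance used here.)

    # Dump controller/ctrlr state for all attached NVMe-oF controllers.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock bdev_nvme_get_controllers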
00:28:56.495 [2024-12-10 00:10:40.798449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:56.754 00:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 487165 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 487165 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 487165 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:57.692 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.693 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:57.693 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.693 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:57.693 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.693 
00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.693 rmmod nvme_tcp 00:28:57.952 rmmod nvme_fabrics 00:28:57.952 rmmod nvme_keyring 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 486888 ']' 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 486888 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 486888 ']' 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 486888 00:28:57.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (486888) - No such process 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 486888 is not found' 00:28:57.952 Process with pid 486888 is not found 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.952 00:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.862 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.862 00:28:59.862 real 0m7.874s 00:28:59.862 user 0m18.961s 00:28:59.862 sys 0m1.573s 00:28:59.862 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.862 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.862 
************************************ 00:28:59.862 END TEST nvmf_shutdown_tc3 00:28:59.862 ************************************ 00:29:00.121 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:00.121 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:00.121 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:00.121 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:00.121 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.121 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:00.121 ************************************ 00:29:00.121 START TEST nvmf_shutdown_tc4 00:29:00.121 ************************************ 00:29:00.121 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:00.122 00:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:00.122 00:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:00.122 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:00.122 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.122 00:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:00.122 Found net devices under 0000:af:00.0: cvl_0_0 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:00.122 Found net devices under 0000:af:00.1: cvl_0_1 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:00.122 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:00.123 00:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:00.123 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:00.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:29:00.382 00:29:00.382 --- 10.0.0.2 ping statistics --- 00:29:00.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.382 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:00.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:29:00.382 00:29:00.382 --- 10.0.0.1 ping statistics --- 00:29:00.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.382 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=488356 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 488356 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 488356 ']' 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
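The block above is the nvmf_tcp_init plumbing: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the second port (cvl_0_1) stays in the host namespace as the initiator, TCP port 4420 is opened in the firewall, and both directions are ping-tested before the target application is launched. Condensed into the bare commands it runs (interface, namespace and address values copied from the trace; the iptables comment tag is omitted; run as root):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                  # host -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespaced target -> host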
00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.382 00:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:00.382 [2024-12-10 00:10:44.829556] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:29:00.382 [2024-12-10 00:10:44.829607] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.641 [2024-12-10 00:10:44.923915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:00.641 [2024-12-10 00:10:44.962925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.641 [2024-12-10 00:10:44.962965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.641 [2024-12-10 00:10:44.962975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.641 [2024-12-10 00:10:44.962984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.641 [2024-12-10 00:10:44.962991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.641 [2024-12-10 00:10:44.964584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.641 [2024-12-10 00:10:44.964691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.641 [2024-12-10 00:10:44.964802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.641 [2024-12-10 00:10:44.964803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:01.207 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.207 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:01.207 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:01.208 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.208 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:01.466 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:01.467 [2024-12-10 00:10:45.699510] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:01.467 00:10:45 
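At this point the trace has started nvmf_tgt (pid 488356) inside the namespace, waited for its RPC socket, created the TCP transport, and defined num_subsystems=({1..10}); the lines that follow generate and apply one small RPC batch per subsystem, which is where Malloc1 through Malloc10 and the 10.0.0.2:4420 listener come from. A simplified, hedged replay of those steps for a single subsystem: SPDK_DIR stands in for the Jenkins workspace checkout, the until-loop replaces the real waitforlisten helper, and the malloc size, block size and serial number are illustrative guesses, since the generated rpcs.txt is not echoed in the trace:

  SPDK_DIR=/path/to/spdk                              # assumed placeholder for the checkout path
  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!                                          # 488356 in this run
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done # crude stand-in for waitforlisten
  rpc="$SPDK_DIR/scripts/rpc.py"
  "$rpc" nvmf_create_transport -t tcp -o -u 8192      # the exact transport call in the log
  # One of the ten generated subsystems (cnode1 .. cnode10 in the later error messages):
  "$rpc" bdev_malloc_create -b Malloc1 64 512
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420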
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.467 00:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:01.467 Malloc1 
00:29:01.467 [2024-12-10 00:10:45.827433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.467 Malloc2 00:29:01.467 Malloc3 00:29:01.467 Malloc4 00:29:01.725 Malloc5 00:29:01.725 Malloc6 00:29:01.725 Malloc7 00:29:01.725 Malloc8 00:29:01.725 Malloc9 00:29:01.725 Malloc10 00:29:01.983 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.983 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:01.983 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.983 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:01.983 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=488622 00:29:01.983 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:01.983 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:01.983 [2024-12-10 00:10:46.344294] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:07.258 00:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:07.258 00:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 488356 00:29:07.258 00:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 488356 ']' 00:29:07.258 00:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 488356 00:29:07.258 00:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:07.258 00:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.258 00:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 488356 00:29:07.258 00:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:07.258 00:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:07.258 00:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 488356' 00:29:07.258 killing process with pid 488356 00:29:07.258 00:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 488356 00:29:07.258 00:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 488356 00:29:07.258 Write completed with error (sct=0, sc=8) 
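That is the tc4 trigger itself: spdk_nvme_perf is started against the 10.0.0.2:4420 listener with a 128-deep queue of 45056-byte random writes, and roughly five seconds in the nvmf_tgt process (pid 488356) is killed underneath it. The flood of "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines below is the expected outcome: in the NVMe generic status type, status code 8 is "Command Aborted due to SQ Deletion", and -6 is ENXIO, the "No such device or address" the qpair errors spell out, both consistent with the target disappearing while I/O is in flight. A condensed recap of the sequence, with the perf flags copied from the trace and SPDK_DIR again an assumed placeholder:

  SPDK_DIR=/path/to/spdk                              # assumed placeholder, as in the earlier sketch
  nvmfpid=488356                                      # the $! captured when nvmf_tgt was launched
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!
  sleep 5
  kill "$nvmfpid"                                     # take the target down mid-run
  wait "$nvmfpid"                                     # works in the test shell, which started the target itself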
00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 [2024-12-10 00:10:51.357364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3650 is same with the state(6) to be set 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 [2024-12-10 00:10:51.357414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3650 is same with the state(6) to be set 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 [2024-12-10 00:10:51.357425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3650 is same with the state(6) to be set 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 [2024-12-10 00:10:51.357441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3650 is same with the state(6) to be set 00:29:07.258 starting I/O failed: -6 00:29:07.258 [2024-12-10 00:10:51.357450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3650 is same with the state(6) to be set 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 [2024-12-10 00:10:51.357736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:07.258 [2024-12-10 00:10:51.357876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18c3b40 is same with the state(6) to be set 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 [2024-12-10 00:10:51.357904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3b40 is same with the state(6) to be set 00:29:07.258 starting I/O failed: -6 00:29:07.258 [2024-12-10 00:10:51.357914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3b40 is same with the state(6) to be set 00:29:07.258 [2024-12-10 00:10:51.357923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3b40 is same with Write completed with error (sct=0, sc=8) 00:29:07.258 the state(6) to be set 00:29:07.258 starting I/O failed: -6 00:29:07.258 [2024-12-10 00:10:51.357933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3b40 is same with the state(6) to be set 00:29:07.258 [2024-12-10 00:10:51.357942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3b40 is same with the state(6) to be set 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 [2024-12-10 00:10:51.357951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3b40 is same with the state(6) to be set 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 [2024-12-10 00:10:51.357960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3b40 is same with the state(6) to be set 00:29:07.258 [2024-12-10 00:10:51.357969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3b40 is same with the state(6) to be set 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 [2024-12-10 00:10:51.357978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3b40 is same with the state(6) to be set 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 [2024-12-10 00:10:51.358331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x18c4010 is same with the state(6) to be set 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 [2024-12-10 00:10:51.358360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c4010 is same with the state(6) to be set 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 [2024-12-10 00:10:51.358371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c4010 is same with the state(6) to be set 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 [2024-12-10 00:10:51.358380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c4010 is same with the state(6) to be set 00:29:07.258 starting I/O failed: -6 00:29:07.258 [2024-12-10 00:10:51.358389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c4010 is same with the state(6) to be set 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 [2024-12-10 00:10:51.358399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c4010 is same with the state(6) to be set 00:29:07.258 starting I/O failed: -6 00:29:07.258 [2024-12-10 00:10:51.358408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c4010 is same with the state(6) to be set 00:29:07.258 [2024-12-10 00:10:51.358417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c4010 is same with Write completed with error (sct=0, sc=8) 00:29:07.258 the state(6) to be set 00:29:07.258 [2024-12-10 00:10:51.358427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c4010 is same with the state(6) to be set 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 starting I/O failed: -6 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.258 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 [2024-12-10 00:10:51.358678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.259 [2024-12-10 00:10:51.358728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3180 is same with the state(6) to be set 00:29:07.259 [2024-12-10 00:10:51.358757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3180 is same with the state(6) to be set 00:29:07.259 [2024-12-10 00:10:51.358768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3180 is same with the state(6) to be set 00:29:07.259 [2024-12-10 00:10:51.358776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3180 
is same with the state(6) to be set 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 [2024-12-10 00:10:51.358785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3180 is same with starting I/O failed: -6 00:29:07.259 the state(6) to be set 00:29:07.259 [2024-12-10 00:10:51.358795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3180 is same with the state(6) to be set 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 [2024-12-10 00:10:51.358803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3180 is same with the state(6) to be set 00:29:07.259 starting I/O failed: -6 00:29:07.259 [2024-12-10 00:10:51.358812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3180 is same with the state(6) to be set 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 
00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 [2024-12-10 00:10:51.359673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 
00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.259 starting I/O failed: -6 00:29:07.259 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 [2024-12-10 00:10:51.361251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.260 NVMe io qpair process completion error 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 
00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 [2024-12-10 00:10:51.362265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 
00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 [2024-12-10 00:10:51.363141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 
starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 [2024-12-10 00:10:51.364099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.260 Write completed with error (sct=0, sc=8) 00:29:07.260 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write 
completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write completed with error (sct=0, sc=8) 00:29:07.261 starting I/O failed: -6 00:29:07.261 Write 
completed with error (sct=0, sc=8)
00:29:07.261 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.261 [2024-12-10 00:10:51.365947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:07.261 NVMe io qpair process completion error
00:29:07.261 [2024-12-10 00:10:51.367017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182c830 is same with the state(6) to be set
00:29:07.261 [2024-12-10 00:10:51.367043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182c830 is same with the state(6) to be set
00:29:07.261 [2024-12-10 00:10:51.367054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182c830 is same with the state(6) to be set
00:29:07.261 [2024-12-10 00:10:51.367785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cd00 is same with the state(6) to be set
00:29:07.261 [2024-12-10 00:10:51.367809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cd00 is same with the state(6) to be set
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.261 [2024-12-10 00:10:51.368499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:07.261 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
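For reference while reading the completion errors above: (sct=0, sc=8) is an NVMe status with Status Code Type 0 (Generic Command Status) and Status Code 0x08, "Command Aborted due to SQ Deletion", and the -6 in "starting I/O failed: -6" and "CQ transport error -6" is -ENXIO ("No such device or address") -- what the host sees once its TCP qpair to the target has gone away while writes are still queued. The fragment below is a minimal, hypothetical SPDK write-completion callback, not code from this test, showing where those sct/sc fields live; it assumes only the public spdk/nvme.h host API.

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical per-I/O context used only for this sketch. */
struct write_ctx {
	bool done;
	bool failed;
};

/* spdk_nvme_cmd_cb: SPDK invokes this with the raw completion entry. */
static void
write_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct write_ctx *ctx = arg;

	ctx->done = true;
	if (spdk_nvme_cpl_is_error(cpl)) {
		ctx->failed = true;
		/* sct=0, sc=8 here means the write was aborted because its
		 * submission queue was deleted, i.e. the qpair was torn down. */
		fprintf(stderr, "Write completed with error (sct=%d, sc=%d)\n",
			cpl->status.sct, cpl->status.sc);
	}
}

A callback of this shape would be passed as the cb_fn argument of spdk_nvme_ns_cmd_write() when writes like the ones above are submitted.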
00:29:07.261 Write completed with error (sct=0, sc=8)
[repeated "Write completed with error (sct=0, sc=8)" entries omitted]
00:29:07.262 [2024-12-10 00:10:51.369470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.262 [2024-12-10 00:10:51.370367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.262 [2024-12-10 00:10:51.371360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.263 [2024-12-10 00:10:51.372922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:07.263 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
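The "CQ transport error -6 ... on qpair id N" entries are printed from nvme_qpair.c when polling a completion queue fails at the transport level; in that case spdk_nvme_qpair_process_completions() returns a negative errno instead of a completion count, and further submissions on that qpair fail immediately (the "starting I/O failed: -6" entries). A minimal, hypothetical polling loop of that shape is sketched below; the function and variable names are illustrative, only the SPDK call is real.

#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical poll loop: reap completions until the transport reports
 * that the qpair is gone (-ENXIO, the -6 seen in the log above). */
static void
poll_until_disconnect(struct spdk_nvme_qpair *qpair)
{
	for (;;) {
		/* 0 = no limit on the number of completions processed per call. */
		int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

		if (rc >= 0) {
			continue; /* rc is the number of completions reaped */
		}
		if (rc == -ENXIO) {
			fprintf(stderr, "CQ transport error %d: qpair disconnected\n", (int)rc);
		} else {
			fprintf(stderr, "completion polling failed: %d\n", (int)rc);
		}
		break; /* stop submitting; tear down or reconnect the qpair */
	}
}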
00:29:07.263 Write completed with error (sct=0, sc=8)
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.263 [2024-12-10 00:10:51.373876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.263 [2024-12-10 00:10:51.374757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.264 [2024-12-10 00:10:51.375793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.264 [2024-12-10 00:10:51.377535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:07.264 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.265 Write completed with error (sct=0, sc=8)
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.265 [2024-12-10 00:10:51.378488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.265 [2024-12-10 00:10:51.379429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.265 [2024-12-10 00:10:51.380433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.266 [2024-12-10 00:10:51.383392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:07.266 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
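The same pattern repeats per qpair (ids 1 through 4 for cnode4, cnode6 and cnode3 above) and per subsystem as the target side goes away. Once every qpair of a controller has failed like this, the host-side controller object is normally marked failed as well. Below is a hypothetical way an application could check for and react to that state with the public SPDK host API; spdk_nvme_ctrlr_is_failed() and spdk_nvme_ctrlr_reset() are real SPDK functions, but the surrounding logic is illustrative and not taken from this test.

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical reaction once polling has reported transport errors on a
 * controller's qpairs: check whether the controller itself is failed and,
 * if so, try to reset (reconnect) it. */
static void
check_and_recover(struct spdk_nvme_ctrlr *ctrlr)
{
	if (spdk_nvme_ctrlr_is_failed(ctrlr)) {
		fprintf(stderr, "controller failed, attempting reset\n");
		if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
			fprintf(stderr, "reset failed, controller must be detached\n");
		}
	}
}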
00:29:07.266 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.266 [2024-12-10 00:10:51.384425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.266 [2024-12-10 00:10:51.385316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.267 [2024-12-10 00:10:51.386326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.268 [2024-12-10 00:10:51.389312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:07.268 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:29:07.268 [2024-12-10
00:10:51.390326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error 
(sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 [2024-12-10 00:10:51.391241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.268 Write completed with error (sct=0, sc=8) 00:29:07.268 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed 
with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 [2024-12-10 00:10:51.392233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: 
-6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 [2024-12-10 00:10:51.394084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.269 NVMe io qpair process completion error 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error 
(sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.269 starting I/O failed: -6 00:29:07.269 Write completed with error (sct=0, sc=8) 00:29:07.270 [2024-12-10 00:10:51.394993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O 
failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 [2024-12-10 00:10:51.395887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write 
completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 [2024-12-10 00:10:51.396898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O 
failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.270 Write completed with error (sct=0, sc=8) 00:29:07.270 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O 
failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 [2024-12-10 00:10:51.398674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.271 NVMe io qpair process completion error 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 [2024-12-10 00:10:51.399674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 
00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 [2024-12-10 00:10:51.400573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 
Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.271 starting I/O failed: -6 00:29:07.271 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 [2024-12-10 00:10:51.401581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 
00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 
00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 starting I/O failed: -6 00:29:07.272 [2024-12-10 00:10:51.404347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.272 NVMe io qpair process completion error 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error 
(sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.272 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed 
with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Write completed with error (sct=0, sc=8) 00:29:07.273 Initializing NVMe Controllers 00:29:07.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:29:07.273 Controller IO queue size 128, less than required. 00:29:07.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:29:07.273 Controller IO queue size 128, less than required. 00:29:07.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:29:07.273 Controller IO queue size 128, less than required. 00:29:07.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:29:07.273 Controller IO queue size 128, less than required. 00:29:07.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:07.273 Controller IO queue size 128, less than required. 00:29:07.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.273 Controller IO queue size 128, less than required. 00:29:07.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:29:07.273 Controller IO queue size 128, less than required. 00:29:07.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:07.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:07.273 Controller IO queue size 128, less than required. 00:29:07.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:29:07.273 Controller IO queue size 128, less than required. 00:29:07.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:07.273 Controller IO queue size 128, less than required. 00:29:07.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:29:07.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:29:07.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:29:07.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:29:07.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:29:07.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:07.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:29:07.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:29:07.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:29:07.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:29:07.273 Initialization complete. Launching workers. 
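The repeated "Controller IO queue size 128, less than required" notices above mean the requested queue depth exceeds what the controller's 128-entry IO queues can hold, so surplus requests wait inside the host NVMe driver. A minimal bash sketch of how one might rerun the workload with a smaller queue depth and IO size (illustrative flags and values, not the exact command this test used):
# Hedged sketch: drive one of the subsystems above with a queue depth that fits the 128-entry IO queues.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
$PERF -q 64 -o 4096 -w randwrite -t 10 \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
The per-subsystem latency summary that follows is still printed even though the run finished with errors.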
00:29:07.273 ========================================================
00:29:07.273 Latency(us)
00:29:07.273 Device Information : IOPS MiB/s Average min max
00:29:07.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2321.28 99.74 55146.94 698.21 99666.41
00:29:07.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2315.54 99.50 55311.36 834.01 101306.76
00:29:07.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2322.14 99.78 55181.59 920.43 115782.20
00:29:07.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2318.31 99.61 55286.34 668.84 94651.89
00:29:07.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2350.22 100.99 54549.97 399.49 92956.89
00:29:07.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2258.94 97.06 56205.94 978.72 90673.49
00:29:07.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2278.09 97.89 55744.70 916.90 90785.50
00:29:07.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2240.65 96.28 57110.77 835.20 109006.77
00:29:07.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2277.03 97.84 55773.44 690.14 89644.19
00:29:07.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2276.82 97.83 55788.01 833.24 95090.90
00:29:07.273 ========================================================
00:29:07.273 Total : 22959.01 986.52 55600.87 399.49 115782.20
00:29:07.273
00:29:07.273 [2024-12-10 00:10:51.412749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e7560 is same with the state(6) to be set
00:29:07.273 [2024-12-10 00:10:51.412803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e7bc0 is same with the state(6) to be set
00:29:07.273 [2024-12-10 00:10:51.412842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e8410 is same with the state(6) to be set
00:29:07.273 [2024-12-10 00:10:51.412875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e99c0 is same with the state(6) to be set
00:29:07.273 [2024-12-10 00:10:51.412906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e8740 is same with the state(6) to be set
00:29:07.273 [2024-12-10 00:10:51.412938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c6870 is same with the state(6) to be set
00:29:07.273 [2024-12-10 00:10:51.412968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9690 is same with the state(6) to be set
00:29:07.273 [2024-12-10 00:10:51.412999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e8a70 is same with the state(6) to be set
00:29:07.273 [2024-12-10 00:10:51.413031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e7890 is same with the state(6) to be set
00:29:07.273 [2024-12-10 00:10:51.413061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e7ef0 is same with the state(6) to be set
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:07.532 00:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:08.470 00:10:52
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 488622 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 488622 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 488622 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:08.470 rmmod nvme_tcp 00:29:08.470 rmmod nvme_fabrics 00:29:08.470 rmmod nvme_keyring 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 488356 ']' 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 488356 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 488356 ']' 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 488356 00:29:08.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (488356) - No such process 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 488356 is not found' 00:29:08.470 Process with pid 488356 is not found 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.470 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.006 00:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:11.006 00:29:11.006 real 0m10.527s 00:29:11.006 user 0m27.434s 00:29:11.006 sys 0m5.555s 00:29:11.006 00:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.006 00:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:11.006 ************************************ 00:29:11.006 END TEST nvmf_shutdown_tc4 00:29:11.006 ************************************ 00:29:11.006 00:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:11.006 00:29:11.006 real 0m43.656s 00:29:11.006 user 1m44.051s 00:29:11.006 sys 0m16.236s 00:29:11.006 00:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.006 00:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- 
# set +x 00:29:11.006 ************************************ 00:29:11.006 END TEST nvmf_shutdown 00:29:11.006 ************************************ 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:11.006 ************************************ 00:29:11.006 START TEST nvmf_nsid 00:29:11.006 ************************************ 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:11.006 * Looking for test storage... 00:29:11.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:11.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.006 --rc genhtml_branch_coverage=1 00:29:11.006 --rc genhtml_function_coverage=1 00:29:11.006 --rc genhtml_legend=1 00:29:11.006 --rc geninfo_all_blocks=1 00:29:11.006 --rc geninfo_unexecuted_blocks=1 00:29:11.006 00:29:11.006 ' 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:11.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.006 --rc genhtml_branch_coverage=1 00:29:11.006 --rc genhtml_function_coverage=1 00:29:11.006 --rc genhtml_legend=1 00:29:11.006 --rc geninfo_all_blocks=1 00:29:11.006 --rc geninfo_unexecuted_blocks=1 00:29:11.006 00:29:11.006 ' 00:29:11.006 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:11.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.006 --rc genhtml_branch_coverage=1 00:29:11.006 --rc genhtml_function_coverage=1 00:29:11.006 --rc genhtml_legend=1 00:29:11.007 --rc geninfo_all_blocks=1 00:29:11.007 --rc geninfo_unexecuted_blocks=1 00:29:11.007 00:29:11.007 ' 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:11.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.007 --rc genhtml_branch_coverage=1 00:29:11.007 --rc genhtml_function_coverage=1 00:29:11.007 --rc genhtml_legend=1 00:29:11.007 --rc geninfo_all_blocks=1 00:29:11.007 --rc geninfo_unexecuted_blocks=1 00:29:11.007 00:29:11.007 ' 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:11.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:11.007 00:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:19.132 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:19.132 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
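At this point the harness is walking its PCI allow-list: both ports of the Intel E810 NIC (vendor 0x8086, device 0x159b) match, and the trace below then maps each PCI function to its kernel net device through sysfs. A rough standalone bash sketch of that discovery (assumes lspci and a mounted sysfs; it is not the harness's own function):
# Hedged sketch: list E810 (8086:159b) PCI functions and the net devices behind them,
# mirroring the "Found net devices under ..." lines printed further down.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
  for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdev" ] && echo "Found net device under $pci: $(basename "$netdev")"
  done
done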
00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:19.132 Found net devices under 0000:af:00.0: cvl_0_0 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:19.132 Found net devices under 0000:af:00.1: cvl_0_1 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.132 00:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.132 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:19.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:29:19.133 00:29:19.133 --- 10.0.0.2 ping statistics --- 00:29:19.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.133 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:19.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:29:19.133 00:29:19.133 --- 10.0.0.1 ping statistics --- 00:29:19.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.133 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=493401 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 493401 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 493401 ']' 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.133 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:19.133 [2024-12-10 00:11:02.613341] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:29:19.133 [2024-12-10 00:11:02.613387] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.133 [2024-12-10 00:11:02.706234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.133 [2024-12-10 00:11:02.745704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.133 [2024-12-10 00:11:02.745741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.133 [2024-12-10 00:11:02.745750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.133 [2024-12-10 00:11:02.745758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.133 [2024-12-10 00:11:02.745765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:19.133 [2024-12-10 00:11:02.746372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=493593 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=7bc84726-f7f9-48e0-bed8-cc50b66c275f 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=db8b75c4-a1a5-49c7-be10-b6c2d0af93b2 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=0f94c1dd-ce2e-4870-a587-15d872e575ee 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.133 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:19.133 null0 00:29:19.133 null1 00:29:19.133 [2024-12-10 00:11:03.550130] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:29:19.133 [2024-12-10 00:11:03.550178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493593 ] 00:29:19.133 null2 00:29:19.133 [2024-12-10 00:11:03.556698] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.133 [2024-12-10 00:11:03.580929] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.392 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.392 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 493593 /var/tmp/tgt2.sock 00:29:19.392 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 493593 ']' 00:29:19.392 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:19.392 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.392 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:19.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
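The three namespaces were just created from freshly generated UUIDs (ns1uuid, ns2uuid, ns3uuid). The test will connect to nqn.2024-10.io.spdk:cnode2 and check that each namespace's NGUID is simply its UUID with the dashes removed and the hex upper-cased, as the trace below shows. A minimal bash sketch of that check for the first namespace (assumes nvme-cli and jq are installed and that the namespace appears as /dev/nvme0n1):
# Hedged sketch of the UUID -> NGUID comparison performed below for ns1uuid.
uuid=7bc84726-f7f9-48e0-bed8-cc50b66c275f                  # value generated by uuidgen above
expected=$(echo "${uuid^^}" | tr -d -)                     # strip dashes, upper-case
actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)   # NGUID reported by the target
[[ "${actual^^}" == "$expected" ]] && echo "nvme0n1 NGUID matches its UUID"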
00:29:19.392 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.392 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:19.392 [2024-12-10 00:11:03.639779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.392 [2024-12-10 00:11:03.679035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.649 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.649 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:19.649 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:19.913 [2024-12-10 00:11:04.203952] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.913 [2024-12-10 00:11:04.220083] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:19.913 nvme0n1 nvme0n2 00:29:19.913 nvme1n1 00:29:19.913 00:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:19.913 00:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:19.913 00:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e 00:29:21.294 00:11:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:21.294 00:11:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:21.294 00:11:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:21.294 00:11:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:21.294 00:11:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:21.294 00:11:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:21.294 00:11:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:21.294 00:11:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:21.294 00:11:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:21.294 00:11:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:21.294 00:11:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:21.294 00:11:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:21.295 00:11:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:22.229 00:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 7bc84726-f7f9-48e0-bed8-cc50b66c275f 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7bc84726f7f948e0bed8cc50b66c275f 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7BC84726F7F948E0BED8CC50B66C275F 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 7BC84726F7F948E0BED8CC50B66C275F == \7\B\C\8\4\7\2\6\F\7\F\9\4\8\E\0\B\E\D\8\C\C\5\0\B\6\6\C\2\7\5\F ]] 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid db8b75c4-a1a5-49c7-be10-b6c2d0af93b2 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=db8b75c4a1a549c7be10b6c2d0af93b2 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DB8B75C4A1A549C7BE10B6C2D0AF93B2 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ DB8B75C4A1A549C7BE10B6C2D0AF93B2 == \D\B\8\B\7\5\C\4\A\1\A\5\4\9\C\7\B\E\1\0\B\6\C\2\D\0\A\F\9\3\B\2 ]] 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:22.229 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:22.487 00:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 0f94c1dd-ce2e-4870-a587-15d872e575ee 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0f94c1ddce2e4870a58715d872e575ee 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0F94C1DDCE2E4870A58715D872E575EE 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 0F94C1DDCE2E4870A58715D872E575EE == \0\F\9\4\C\1\D\D\C\E\2\E\4\8\7\0\A\5\8\7\1\5\D\8\7\2\E\5\7\5\E\E ]] 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 493593 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 493593 ']' 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 493593 00:29:22.487 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:22.746 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:22.746 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493593 00:29:22.746 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:22.746 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:22.746 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493593' 00:29:22.746 killing process with pid 493593 00:29:22.746 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 493593 00:29:22.746 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 493593 00:29:23.007 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:23.007 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:23.007 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:23.007 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:23.007 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:29:23.007 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:23.007 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:23.007 rmmod nvme_tcp 00:29:23.007 rmmod nvme_fabrics 00:29:23.007 rmmod nvme_keyring 00:29:23.007 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:23.007 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:23.007 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:23.007 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 493401 ']' 00:29:23.007 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 493401 00:29:23.008 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 493401 ']' 00:29:23.008 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 493401 00:29:23.008 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:23.008 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.008 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493401 00:29:23.008 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:23.008 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:23.008 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493401' 00:29:23.008 killing process with pid 493401 00:29:23.008 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 493401 00:29:23.008 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 493401 00:29:23.267 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.268 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.268 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.268 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:23.268 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:23.268 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.268 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:23.268 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.268 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.268 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.268 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.268 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.804 00:11:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:25.804 00:29:25.804 real 0m14.649s 00:29:25.804 user 0m11.093s 00:29:25.804 
sys 0m6.915s 00:29:25.804 00:11:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.804 00:11:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:25.804 ************************************ 00:29:25.804 END TEST nvmf_nsid 00:29:25.804 ************************************ 00:29:25.804 00:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:25.804 00:29:25.804 real 12m57.250s 00:29:25.804 user 26m36.675s 00:29:25.804 sys 4m31.578s 00:29:25.804 00:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.804 00:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:25.804 ************************************ 00:29:25.804 END TEST nvmf_target_extra 00:29:25.804 ************************************ 00:29:25.804 00:11:09 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:25.804 00:11:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:25.804 00:11:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.804 00:11:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:25.804 ************************************ 00:29:25.804 START TEST nvmf_host 00:29:25.804 ************************************ 00:29:25.804 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:25.804 * Looking for test storage... 00:29:25.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:25.804 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:25.804 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:25.804 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.804 --rc genhtml_branch_coverage=1 00:29:25.804 --rc genhtml_function_coverage=1 00:29:25.804 --rc genhtml_legend=1 00:29:25.804 --rc geninfo_all_blocks=1 00:29:25.804 --rc geninfo_unexecuted_blocks=1 00:29:25.804 00:29:25.804 ' 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.804 --rc genhtml_branch_coverage=1 00:29:25.804 --rc genhtml_function_coverage=1 00:29:25.804 --rc genhtml_legend=1 00:29:25.804 --rc geninfo_all_blocks=1 00:29:25.804 --rc geninfo_unexecuted_blocks=1 00:29:25.804 00:29:25.804 ' 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.804 --rc genhtml_branch_coverage=1 00:29:25.804 --rc genhtml_function_coverage=1 00:29:25.804 --rc genhtml_legend=1 00:29:25.804 --rc geninfo_all_blocks=1 00:29:25.804 --rc geninfo_unexecuted_blocks=1 00:29:25.804 00:29:25.804 ' 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.804 --rc genhtml_branch_coverage=1 00:29:25.804 --rc genhtml_function_coverage=1 00:29:25.804 --rc genhtml_legend=1 00:29:25.804 --rc geninfo_all_blocks=1 00:29:25.804 --rc geninfo_unexecuted_blocks=1 00:29:25.804 00:29:25.804 ' 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
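For reference, the NGUID checks in the nsid test above reduce to a small shell pattern: strip the dashes from the expected UUID, read the namespace's NGUID with nvme-cli and jq, and compare the two case-insensitively. A minimal sketch, assuming nvme-cli and jq are installed and reusing the first UUID and device from this run as an example:

    uuid=7bc84726-f7f9-48e0-bed8-cc50b66c275f             # expected UUID (example value from the trace)
    expected=$(tr -d '-' <<< "$uuid")                      # uuid2nguid: drop the dashes
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ ${nguid^^} == "${expected^^}" ]] && echo "NGUID matches" || echo "NGUID mismatch" >&2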
00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.804 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:25.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.805 ************************************ 00:29:25.805 START TEST nvmf_multicontroller 00:29:25.805 ************************************ 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:25.805 * Looking for test storage... 
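The "[: : integer expression expected" message above comes from common.sh line 33 applying a numeric -eq test to a variable that is empty in this configuration; test(1) rejects an empty operand and the suite simply carries on. A minimal reproduction and a guarded variant (the variable name here is illustrative, not the one used by common.sh):

    var=""
    [ "$var" -eq 1 ] && echo yes        # prints: [: : integer expression expected
    [ "${var:-0}" -eq 1 ] && echo yes   # defaulting the empty value avoids the warning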
00:29:25.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:29:25.805 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:26.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.065 --rc genhtml_branch_coverage=1 00:29:26.065 --rc genhtml_function_coverage=1 00:29:26.065 --rc genhtml_legend=1 00:29:26.065 --rc geninfo_all_blocks=1 00:29:26.065 --rc geninfo_unexecuted_blocks=1 00:29:26.065 00:29:26.065 ' 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:26.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.065 --rc genhtml_branch_coverage=1 00:29:26.065 --rc genhtml_function_coverage=1 00:29:26.065 --rc genhtml_legend=1 00:29:26.065 --rc geninfo_all_blocks=1 00:29:26.065 --rc geninfo_unexecuted_blocks=1 00:29:26.065 00:29:26.065 ' 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:26.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.065 --rc genhtml_branch_coverage=1 00:29:26.065 --rc genhtml_function_coverage=1 00:29:26.065 --rc genhtml_legend=1 00:29:26.065 --rc geninfo_all_blocks=1 00:29:26.065 --rc geninfo_unexecuted_blocks=1 00:29:26.065 00:29:26.065 ' 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:26.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.065 --rc genhtml_branch_coverage=1 00:29:26.065 --rc genhtml_function_coverage=1 00:29:26.065 --rc genhtml_legend=1 00:29:26.065 --rc geninfo_all_blocks=1 00:29:26.065 --rc geninfo_unexecuted_blocks=1 00:29:26.065 00:29:26.065 ' 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:26.065 00:11:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:26.065 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:26.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:26.066 00:11:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:26.066 00:11:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:34.198 
00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:34.198 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:34.198 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.198 00:11:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:34.198 Found net devices under 0000:af:00.0: cvl_0_0 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:34.198 Found net devices under 0000:af:00.1: cvl_0_1 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
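The NIC discovery above works from sysfs rather than lspci output: for each supported PCI ID it expands /sys/bus/pci/devices/<bdf>/net/* to find the bound netdev names, which is how cvl_0_0 and cvl_0_1 are reported. A rough standalone equivalent using the two BDFs from this run:

    for pci in 0000:af:00.0 0000:af:00.1; do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
        done
    done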
00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.198 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:29:34.198 00:29:34.198 --- 10.0.0.2 ping statistics --- 00:29:34.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.199 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:34.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:29:34.199 00:29:34.199 --- 10.0.0.1 ping statistics --- 00:29:34.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.199 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=498163 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 498163 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 498163 ']' 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.199 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.199 [2024-12-10 00:11:17.735137] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:29:34.199 [2024-12-10 00:11:17.735193] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.199 [2024-12-10 00:11:17.830350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:34.199 [2024-12-10 00:11:17.872175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.199 [2024-12-10 00:11:17.872212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.199 [2024-12-10 00:11:17.872224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.199 [2024-12-10 00:11:17.872233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.199 [2024-12-10 00:11:17.872240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:34.199 [2024-12-10 00:11:17.873716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.199 [2024-12-10 00:11:17.873861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.199 [2024-12-10 00:11:17.873862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.199 [2024-12-10 00:11:18.628895] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.199 Malloc0 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.199 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.456 [2024-12-10 00:11:18.690502] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.456 [2024-12-10 00:11:18.698439] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.456 Malloc1 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=498327 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 498327 /var/tmp/bdevperf.sock 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 498327 ']' 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:34.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
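The rpc_cmd calls above drive the target's JSON-RPC interface; in this suite rpc_cmd is effectively a wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock (that wrapper detail is an assumption, not shown in the trace). Issued by hand, the same target configuration would look roughly like this, with the arguments copied from the trace:

    rpc=./scripts/rpc.py   # run from the spdk checkout; socket defaults to /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421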
00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.456 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.713 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.713 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:34.713 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:34.713 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.713 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.971 NVMe0n1 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.971 1 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.971 request: 00:29:34.971 { 00:29:34.971 "name": "NVMe0", 00:29:34.971 "trtype": "tcp", 00:29:34.971 "traddr": "10.0.0.2", 00:29:34.971 "adrfam": "ipv4", 00:29:34.971 "trsvcid": "4420", 00:29:34.971 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:34.971 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:34.971 "hostaddr": "10.0.0.1", 00:29:34.971 "prchk_reftag": false, 00:29:34.971 "prchk_guard": false, 00:29:34.971 "hdgst": false, 00:29:34.971 "ddgst": false, 00:29:34.971 "allow_unrecognized_csi": false, 00:29:34.971 "method": "bdev_nvme_attach_controller", 00:29:34.971 "req_id": 1 00:29:34.971 } 00:29:34.971 Got JSON-RPC error response 00:29:34.971 response: 00:29:34.971 { 00:29:34.971 "code": -114, 00:29:34.971 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:34.971 } 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.971 request: 00:29:34.971 { 00:29:34.971 "name": "NVMe0", 00:29:34.971 "trtype": "tcp", 00:29:34.971 "traddr": "10.0.0.2", 00:29:34.971 "adrfam": "ipv4", 00:29:34.971 "trsvcid": "4420", 00:29:34.971 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:34.971 "hostaddr": "10.0.0.1", 00:29:34.971 "prchk_reftag": false, 00:29:34.971 "prchk_guard": false, 00:29:34.971 "hdgst": false, 00:29:34.971 "ddgst": false, 00:29:34.971 "allow_unrecognized_csi": false, 00:29:34.971 "method": "bdev_nvme_attach_controller", 00:29:34.971 "req_id": 1 00:29:34.971 } 00:29:34.971 Got JSON-RPC error response 00:29:34.971 response: 00:29:34.971 { 00:29:34.971 "code": -114, 00:29:34.971 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:34.971 } 00:29:34.971 00:11:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:34.971 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.972 request: 00:29:34.972 { 00:29:34.972 "name": "NVMe0", 00:29:34.972 "trtype": "tcp", 00:29:34.972 "traddr": "10.0.0.2", 00:29:34.972 "adrfam": "ipv4", 00:29:34.972 "trsvcid": "4420", 00:29:34.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.972 "hostaddr": "10.0.0.1", 00:29:34.972 "prchk_reftag": false, 00:29:34.972 "prchk_guard": false, 00:29:34.972 "hdgst": false, 00:29:34.972 "ddgst": false, 00:29:34.972 "multipath": "disable", 00:29:34.972 "allow_unrecognized_csi": false, 00:29:34.972 "method": "bdev_nvme_attach_controller", 00:29:34.972 "req_id": 1 00:29:34.972 } 00:29:34.972 Got JSON-RPC error response 00:29:34.972 response: 00:29:34.972 { 00:29:34.972 "code": -114, 00:29:34.972 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:34.972 } 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:34.972 00:11:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.972 request: 00:29:34.972 { 00:29:34.972 "name": "NVMe0", 00:29:34.972 "trtype": "tcp", 00:29:34.972 "traddr": "10.0.0.2", 00:29:34.972 "adrfam": "ipv4", 00:29:34.972 "trsvcid": "4420", 00:29:34.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.972 "hostaddr": "10.0.0.1", 00:29:34.972 "prchk_reftag": false, 00:29:34.972 "prchk_guard": false, 00:29:34.972 "hdgst": false, 00:29:34.972 "ddgst": false, 00:29:34.972 "multipath": "failover", 00:29:34.972 "allow_unrecognized_csi": false, 00:29:34.972 "method": "bdev_nvme_attach_controller", 00:29:34.972 "req_id": 1 00:29:34.972 } 00:29:34.972 Got JSON-RPC error response 00:29:34.972 response: 00:29:34.972 { 00:29:34.972 "code": -114, 00:29:34.972 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:34.972 } 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.972 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.229 NVMe0n1 00:29:35.229 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
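Note: the NOT-wrapped attach attempts above are expected failures, all answered with code -114: re-using the NVMe0 name for a different subsystem (cnode2), re-attaching with "multipath": "disable", and re-attaching with "multipath": "failover" on the original network path. Only the final attach, the same subsystem reached through the second listener on port 4421, succeeds and adds a second path under NVMe0. A rough stand-alone illustration follows; the addresses and flags are taken from the log, and the || echo guard is only illustrative.
# rejected: same bdev name, different subsystem on the same socket
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
  -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 \
  || echo "rejected as expected (-114)"
# accepted: same subsystem via the second listener extends NVMe0 with another path
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
  -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1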
00:29:35.229 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:35.229 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.229 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.229 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.229 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:35.229 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.229 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.229 00:29:35.229 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.229 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:35.229 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:35.229 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.229 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.487 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.487 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:35.487 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:36.419 { 00:29:36.419 "results": [ 00:29:36.419 { 00:29:36.419 "job": "NVMe0n1", 00:29:36.419 "core_mask": "0x1", 00:29:36.419 "workload": "write", 00:29:36.419 "status": "finished", 00:29:36.419 "queue_depth": 128, 00:29:36.419 "io_size": 4096, 00:29:36.419 "runtime": 1.007619, 00:29:36.419 "iops": 25836.154340082907, 00:29:36.419 "mibps": 100.92247789094885, 00:29:36.419 "io_failed": 0, 00:29:36.419 "io_timeout": 0, 00:29:36.419 "avg_latency_us": 4947.812423431798, 00:29:36.419 "min_latency_us": 2162.688, 00:29:36.419 "max_latency_us": 9804.1856 00:29:36.419 } 00:29:36.419 ], 00:29:36.419 "core_count": 1 00:29:36.419 } 00:29:36.419 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:36.419 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.419 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.419 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.419 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:36.419 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 498327 00:29:36.419 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # 
'[' -z 498327 ']' 00:29:36.419 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 498327 00:29:36.419 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:36.419 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.419 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 498327 00:29:36.678 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:36.678 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:36.678 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 498327' 00:29:36.678 killing process with pid 498327 00:29:36.678 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 498327 00:29:36.678 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 498327 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:36.678 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:36.678 [2024-12-10 00:11:18.805211] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:29:36.678 [2024-12-10 00:11:18.805262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid498327 ] 00:29:36.678 [2024-12-10 00:11:18.898191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.678 [2024-12-10 00:11:18.939278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.678 [2024-12-10 00:11:19.695531] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 6e978e84-baf7-4fbd-81ba-0143be854a22 already exists 00:29:36.678 [2024-12-10 00:11:19.695558] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:6e978e84-baf7-4fbd-81ba-0143be854a22 alias for bdev NVMe1n1 00:29:36.678 [2024-12-10 00:11:19.695568] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:36.678 Running I/O for 1 seconds... 00:29:36.678 25778.00 IOPS, 100.70 MiB/s 00:29:36.678 Latency(us) 00:29:36.678 [2024-12-09T23:11:21.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.678 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:36.678 NVMe0n1 : 1.01 25836.15 100.92 0.00 0.00 4947.81 2162.69 9804.19 00:29:36.678 [2024-12-09T23:11:21.151Z] =================================================================================================================== 00:29:36.678 [2024-12-09T23:11:21.151Z] Total : 25836.15 100.92 0.00 0.00 4947.81 2162.69 9804.19 00:29:36.678 Received shutdown signal, test time was about 1.000000 seconds 00:29:36.678 00:29:36.678 Latency(us) 00:29:36.678 [2024-12-09T23:11:21.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.678 [2024-12-09T23:11:21.151Z] =================================================================================================================== 00:29:36.678 [2024-12-09T23:11:21.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:36.678 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:36.678 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:36.678 rmmod nvme_tcp 00:29:36.936 rmmod nvme_fabrics 00:29:36.936 rmmod nvme_keyring 00:29:36.936 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:36.936 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:36.936 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:36.936 
00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 498163 ']' 00:29:36.936 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 498163 00:29:36.936 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 498163 ']' 00:29:36.936 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 498163 00:29:36.936 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:36.936 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.936 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 498163 00:29:36.936 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:36.936 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:36.936 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 498163' 00:29:36.936 killing process with pid 498163 00:29:36.936 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 498163 00:29:36.936 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 498163 00:29:37.196 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:37.196 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:37.196 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:37.196 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:37.196 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:37.196 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:37.196 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:37.196 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:37.196 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:37.196 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.196 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.196 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.101 00:11:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:39.102 00:29:39.102 real 0m13.416s 00:29:39.102 user 0m15.295s 00:29:39.102 sys 0m6.589s 00:29:39.102 00:11:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.102 00:11:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:39.102 ************************************ 00:29:39.102 END TEST nvmf_multicontroller 00:29:39.102 ************************************ 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.362 ************************************ 00:29:39.362 START TEST nvmf_aer 00:29:39.362 ************************************ 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:39.362 * Looking for test storage... 00:29:39.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.362 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:39.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.622 --rc genhtml_branch_coverage=1 00:29:39.622 --rc genhtml_function_coverage=1 00:29:39.622 --rc genhtml_legend=1 00:29:39.622 --rc geninfo_all_blocks=1 00:29:39.622 --rc geninfo_unexecuted_blocks=1 00:29:39.622 00:29:39.622 ' 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:39.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.622 --rc genhtml_branch_coverage=1 00:29:39.622 --rc genhtml_function_coverage=1 00:29:39.622 --rc genhtml_legend=1 00:29:39.622 --rc geninfo_all_blocks=1 00:29:39.622 --rc geninfo_unexecuted_blocks=1 00:29:39.622 00:29:39.622 ' 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:39.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.622 --rc genhtml_branch_coverage=1 00:29:39.622 --rc genhtml_function_coverage=1 00:29:39.622 --rc genhtml_legend=1 00:29:39.622 --rc geninfo_all_blocks=1 00:29:39.622 --rc geninfo_unexecuted_blocks=1 00:29:39.622 00:29:39.622 ' 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:39.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.622 --rc genhtml_branch_coverage=1 00:29:39.622 --rc genhtml_function_coverage=1 00:29:39.622 --rc genhtml_legend=1 00:29:39.622 --rc geninfo_all_blocks=1 00:29:39.622 --rc geninfo_unexecuted_blocks=1 00:29:39.622 00:29:39.622 ' 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.622 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:39.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:39.623 00:11:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.743 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:47.744 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:47.744 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:47.744 Found net devices under 0000:af:00.0: cvl_0_0 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.744 00:11:30 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:47.744 Found net devices under 0000:af:00.1: cvl_0_1 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.744 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.744 
00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:29:47.744 00:29:47.744 --- 10.0.0.2 ping statistics --- 00:29:47.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.744 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:29:47.744 00:29:47.744 --- 10.0.0.1 ping statistics --- 00:29:47.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.744 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=502420 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 502420 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 502420 ']' 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.744 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.744 [2024-12-10 00:11:31.215284] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
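Note: nvmftestinit above isolates the target-side port in its own network namespace and leaves the initiator-side port in the default namespace, so host and target genuinely talk over TCP between 10.0.0.1 and 10.0.0.2. Condensed from the commands visible in the log; the cvl_0_0/cvl_0_1 interface names are whatever the harness detected, and the harness itself runs nvmf_tgt through its netns wrapper with absolute workspace paths.
# target port moves into a private netns with 10.0.0.2, initiator keeps 10.0.0.1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &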
00:29:47.744 [2024-12-10 00:11:31.215334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.744 [2024-12-10 00:11:31.310479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:47.744 [2024-12-10 00:11:31.350285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.744 [2024-12-10 00:11:31.350323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.744 [2024-12-10 00:11:31.350332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.745 [2024-12-10 00:11:31.350342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.745 [2024-12-10 00:11:31.350349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.745 [2024-12-10 00:11:31.352064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.745 [2024-12-10 00:11:31.352173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.745 [2024-12-10 00:11:31.352309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.745 [2024-12-10 00:11:31.352309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.745 [2024-12-10 00:11:32.100344] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.745 Malloc0 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.745 [2024-12-10 00:11:32.163897] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.745 [ 00:29:47.745 { 00:29:47.745 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:47.745 "subtype": "Discovery", 00:29:47.745 "listen_addresses": [], 00:29:47.745 "allow_any_host": true, 00:29:47.745 "hosts": [] 00:29:47.745 }, 00:29:47.745 { 00:29:47.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:47.745 "subtype": "NVMe", 00:29:47.745 "listen_addresses": [ 00:29:47.745 { 00:29:47.745 "trtype": "TCP", 00:29:47.745 "adrfam": "IPv4", 00:29:47.745 "traddr": "10.0.0.2", 00:29:47.745 "trsvcid": "4420" 00:29:47.745 } 00:29:47.745 ], 00:29:47.745 "allow_any_host": true, 00:29:47.745 "hosts": [], 00:29:47.745 "serial_number": "SPDK00000000000001", 00:29:47.745 "model_number": "SPDK bdev Controller", 00:29:47.745 "max_namespaces": 2, 00:29:47.745 "min_cntlid": 1, 00:29:47.745 "max_cntlid": 65519, 00:29:47.745 "namespaces": [ 00:29:47.745 { 00:29:47.745 "nsid": 1, 00:29:47.745 "bdev_name": "Malloc0", 00:29:47.745 "name": "Malloc0", 00:29:47.745 "nguid": "71E032D631044D5CA7DA0623B4DF65E4", 00:29:47.745 "uuid": "71e032d6-3104-4d5c-a7da-0623b4df65e4" 00:29:47.745 } 00:29:47.745 ] 00:29:47.745 } 00:29:47.745 ] 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=502701 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:47.745 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:48.003 Malloc1 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:48.003 Asynchronous Event Request test 00:29:48.003 Attaching to 10.0.0.2 00:29:48.003 Attached to 10.0.0.2 00:29:48.003 Registering asynchronous event callbacks... 00:29:48.003 Starting namespace attribute notice tests for all controllers... 00:29:48.003 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:48.003 aer_cb - Changed Namespace 00:29:48.003 Cleaning up... 
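For reference, the nvmf_aer sequence traced above boils down to a handful of RPCs plus the aer helper binary. The following is a minimal sketch only, reissuing the same calls through scripts/rpc.py instead of the test framework's rpc_cmd wrapper; every command, path, and address is taken from the trace itself, and the target is assumed to already be running.

  # Sketch: equivalent of the rpc_cmd calls traced above, issued via scripts/rpc.py.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # The aer helper connects as a host, arms AER callbacks, then touches the file once it is ready.
  ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # Adding a second namespace while the helper waits triggers the namespace-attribute-changed AEN
  # ("aer_cb - Changed Namespace" in the output above).
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2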
00:29:48.003 [ 00:29:48.003 { 00:29:48.003 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:48.003 "subtype": "Discovery", 00:29:48.003 "listen_addresses": [], 00:29:48.003 "allow_any_host": true, 00:29:48.003 "hosts": [] 00:29:48.003 }, 00:29:48.003 { 00:29:48.003 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.003 "subtype": "NVMe", 00:29:48.003 "listen_addresses": [ 00:29:48.003 { 00:29:48.003 "trtype": "TCP", 00:29:48.003 "adrfam": "IPv4", 00:29:48.003 "traddr": "10.0.0.2", 00:29:48.003 "trsvcid": "4420" 00:29:48.003 } 00:29:48.003 ], 00:29:48.003 "allow_any_host": true, 00:29:48.003 "hosts": [], 00:29:48.003 "serial_number": "SPDK00000000000001", 00:29:48.003 "model_number": "SPDK bdev Controller", 00:29:48.003 "max_namespaces": 2, 00:29:48.003 "min_cntlid": 1, 00:29:48.003 "max_cntlid": 65519, 00:29:48.003 "namespaces": [ 00:29:48.003 { 00:29:48.003 "nsid": 1, 00:29:48.003 "bdev_name": "Malloc0", 00:29:48.003 "name": "Malloc0", 00:29:48.003 "nguid": "71E032D631044D5CA7DA0623B4DF65E4", 00:29:48.003 "uuid": "71e032d6-3104-4d5c-a7da-0623b4df65e4" 00:29:48.003 }, 00:29:48.003 { 00:29:48.003 "nsid": 2, 00:29:48.003 "bdev_name": "Malloc1", 00:29:48.003 "name": "Malloc1", 00:29:48.003 "nguid": "427CD7D66D354E56BA4B42880D2395B0", 00:29:48.003 "uuid": "427cd7d6-6d35-4e56-ba4b-42880d2395b0" 00:29:48.003 } 00:29:48.003 ] 00:29:48.003 } 00:29:48.003 ] 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 502701 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.003 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.261 rmmod 
nvme_tcp 00:29:48.261 rmmod nvme_fabrics 00:29:48.261 rmmod nvme_keyring 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 502420 ']' 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 502420 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 502420 ']' 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 502420 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 502420 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 502420' 00:29:48.261 killing process with pid 502420 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 502420 00:29:48.261 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 502420 00:29:48.520 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.520 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.520 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.520 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:48.520 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:48.520 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:48.520 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:48.520 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.520 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.520 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.520 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.520 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.056 00:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:51.056 00:29:51.056 real 0m11.296s 00:29:51.056 user 0m8.163s 00:29:51.056 sys 0m6.079s 00:29:51.056 00:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.056 00:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:51.056 ************************************ 00:29:51.056 END TEST nvmf_aer 00:29:51.056 ************************************ 00:29:51.056 00:11:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:51.056 00:11:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:51.056 00:11:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.056 00:11:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.056 ************************************ 00:29:51.056 START TEST nvmf_async_init 00:29:51.056 ************************************ 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:51.056 * Looking for test storage... 00:29:51.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:51.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.056 --rc genhtml_branch_coverage=1 00:29:51.056 --rc genhtml_function_coverage=1 00:29:51.056 --rc genhtml_legend=1 00:29:51.056 --rc geninfo_all_blocks=1 00:29:51.056 --rc geninfo_unexecuted_blocks=1 00:29:51.056 00:29:51.056 ' 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:51.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.056 --rc genhtml_branch_coverage=1 00:29:51.056 --rc genhtml_function_coverage=1 00:29:51.056 --rc genhtml_legend=1 00:29:51.056 --rc geninfo_all_blocks=1 00:29:51.056 --rc geninfo_unexecuted_blocks=1 00:29:51.056 00:29:51.056 ' 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:51.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.056 --rc genhtml_branch_coverage=1 00:29:51.056 --rc genhtml_function_coverage=1 00:29:51.056 --rc genhtml_legend=1 00:29:51.056 --rc geninfo_all_blocks=1 00:29:51.056 --rc geninfo_unexecuted_blocks=1 00:29:51.056 00:29:51.056 ' 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:51.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.056 --rc genhtml_branch_coverage=1 00:29:51.056 --rc genhtml_function_coverage=1 00:29:51.056 --rc genhtml_legend=1 00:29:51.056 --rc geninfo_all_blocks=1 00:29:51.056 --rc geninfo_unexecuted_blocks=1 00:29:51.056 00:29:51.056 ' 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.056 00:11:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:51.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:51.056 00:11:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=dc58478933754089bd5142226e02d1f4 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.056 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.057 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.057 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:51.057 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:51.057 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:51.057 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:59.179 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:59.179 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:59.179 Found net devices under 0000:af:00.0: cvl_0_0 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:59.179 Found net devices under 0000:af:00.1: cvl_0_1 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.179 00:11:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.179 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:29:59.180 00:29:59.180 --- 10.0.0.2 ping statistics --- 00:29:59.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.180 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:59.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:29:59.180 00:29:59.180 --- 10.0.0.1 ping statistics --- 00:29:59.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.180 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=506393 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 506393 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 506393 ']' 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.180 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.180 [2024-12-10 00:11:42.645418] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
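The nvmf_tcp_init trace just above moves one port of the detected e810 pair into a private network namespace so that target (10.0.0.2) and initiator (10.0.0.1) traffic really crosses the physical link. A condensed sketch of those steps follows; the interface names cvl_0_0/cvl_0_1 are simply the ones detected in this run.

  # Condensed from the nvmf_tcp_init trace above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP stays on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                         # sanity check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why the ping round trips above are the last check before nvmfappstart.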
00:29:59.180 [2024-12-10 00:11:42.645466] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.180 [2024-12-10 00:11:42.739842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.180 [2024-12-10 00:11:42.779032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.180 [2024-12-10 00:11:42.779068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.180 [2024-12-10 00:11:42.779079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.180 [2024-12-10 00:11:42.779087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.180 [2024-12-10 00:11:42.779095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.180 [2024-12-10 00:11:42.779666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.180 [2024-12-10 00:11:43.533568] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.180 null0 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dc58478933754089bd5142226e02d1f4 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.180 [2024-12-10 00:11:43.577838] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.180 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.439 nvme0n1 00:29:59.439 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.439 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:59.439 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.439 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.439 [ 00:29:59.439 { 00:29:59.439 "name": "nvme0n1", 00:29:59.439 "aliases": [ 00:29:59.439 "dc584789-3375-4089-bd51-42226e02d1f4" 00:29:59.439 ], 00:29:59.439 "product_name": "NVMe disk", 00:29:59.439 "block_size": 512, 00:29:59.439 "num_blocks": 2097152, 00:29:59.439 "uuid": "dc584789-3375-4089-bd51-42226e02d1f4", 00:29:59.439 "numa_id": 1, 00:29:59.439 "assigned_rate_limits": { 00:29:59.440 "rw_ios_per_sec": 0, 00:29:59.440 "rw_mbytes_per_sec": 0, 00:29:59.440 "r_mbytes_per_sec": 0, 00:29:59.440 "w_mbytes_per_sec": 0 00:29:59.440 }, 00:29:59.440 "claimed": false, 00:29:59.440 "zoned": false, 00:29:59.440 "supported_io_types": { 00:29:59.440 "read": true, 00:29:59.440 "write": true, 00:29:59.440 "unmap": false, 00:29:59.440 "flush": true, 00:29:59.440 "reset": true, 00:29:59.440 "nvme_admin": true, 00:29:59.440 "nvme_io": true, 00:29:59.440 "nvme_io_md": false, 00:29:59.440 "write_zeroes": true, 00:29:59.440 "zcopy": false, 00:29:59.440 "get_zone_info": false, 00:29:59.440 "zone_management": false, 00:29:59.440 "zone_append": false, 00:29:59.440 "compare": true, 00:29:59.440 "compare_and_write": true, 00:29:59.440 "abort": true, 00:29:59.440 "seek_hole": false, 00:29:59.440 "seek_data": false, 00:29:59.440 "copy": true, 00:29:59.440 "nvme_iov_md": false 00:29:59.440 }, 00:29:59.440 
"memory_domains": [ 00:29:59.440 { 00:29:59.440 "dma_device_id": "system", 00:29:59.440 "dma_device_type": 1 00:29:59.440 } 00:29:59.440 ], 00:29:59.440 "driver_specific": { 00:29:59.440 "nvme": [ 00:29:59.440 { 00:29:59.440 "trid": { 00:29:59.440 "trtype": "TCP", 00:29:59.440 "adrfam": "IPv4", 00:29:59.440 "traddr": "10.0.0.2", 00:29:59.440 "trsvcid": "4420", 00:29:59.440 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:59.440 }, 00:29:59.440 "ctrlr_data": { 00:29:59.440 "cntlid": 1, 00:29:59.440 "vendor_id": "0x8086", 00:29:59.440 "model_number": "SPDK bdev Controller", 00:29:59.440 "serial_number": "00000000000000000000", 00:29:59.440 "firmware_revision": "25.01", 00:29:59.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.440 "oacs": { 00:29:59.440 "security": 0, 00:29:59.440 "format": 0, 00:29:59.440 "firmware": 0, 00:29:59.440 "ns_manage": 0 00:29:59.440 }, 00:29:59.440 "multi_ctrlr": true, 00:29:59.440 "ana_reporting": false 00:29:59.440 }, 00:29:59.440 "vs": { 00:29:59.440 "nvme_version": "1.3" 00:29:59.440 }, 00:29:59.440 "ns_data": { 00:29:59.440 "id": 1, 00:29:59.440 "can_share": true 00:29:59.440 } 00:29:59.440 } 00:29:59.440 ], 00:29:59.440 "mp_policy": "active_passive" 00:29:59.440 } 00:29:59.440 } 00:29:59.440 ] 00:29:59.440 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.440 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:59.440 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.440 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.440 [2024-12-10 00:11:43.847469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:59.440 [2024-12-10 00:11:43.847528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129cd00 (9): Bad file descriptor 00:29:59.699 [2024-12-10 00:11:43.979910] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:59.699 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.699 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:59.699 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.699 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.699 [ 00:29:59.699 { 00:29:59.699 "name": "nvme0n1", 00:29:59.699 "aliases": [ 00:29:59.699 "dc584789-3375-4089-bd51-42226e02d1f4" 00:29:59.699 ], 00:29:59.699 "product_name": "NVMe disk", 00:29:59.699 "block_size": 512, 00:29:59.699 "num_blocks": 2097152, 00:29:59.699 "uuid": "dc584789-3375-4089-bd51-42226e02d1f4", 00:29:59.699 "numa_id": 1, 00:29:59.699 "assigned_rate_limits": { 00:29:59.699 "rw_ios_per_sec": 0, 00:29:59.699 "rw_mbytes_per_sec": 0, 00:29:59.699 "r_mbytes_per_sec": 0, 00:29:59.699 "w_mbytes_per_sec": 0 00:29:59.699 }, 00:29:59.699 "claimed": false, 00:29:59.699 "zoned": false, 00:29:59.699 "supported_io_types": { 00:29:59.699 "read": true, 00:29:59.699 "write": true, 00:29:59.699 "unmap": false, 00:29:59.699 "flush": true, 00:29:59.699 "reset": true, 00:29:59.699 "nvme_admin": true, 00:29:59.699 "nvme_io": true, 00:29:59.699 "nvme_io_md": false, 00:29:59.699 "write_zeroes": true, 00:29:59.699 "zcopy": false, 00:29:59.699 "get_zone_info": false, 00:29:59.699 "zone_management": false, 00:29:59.699 "zone_append": false, 00:29:59.699 "compare": true, 00:29:59.699 "compare_and_write": true, 00:29:59.699 "abort": true, 00:29:59.699 "seek_hole": false, 00:29:59.699 "seek_data": false, 00:29:59.699 "copy": true, 00:29:59.699 "nvme_iov_md": false 00:29:59.699 }, 00:29:59.699 "memory_domains": [ 00:29:59.699 { 00:29:59.699 "dma_device_id": "system", 00:29:59.699 "dma_device_type": 1 00:29:59.699 } 00:29:59.699 ], 00:29:59.699 "driver_specific": { 00:29:59.699 "nvme": [ 00:29:59.699 { 00:29:59.699 "trid": { 00:29:59.699 "trtype": "TCP", 00:29:59.699 "adrfam": "IPv4", 00:29:59.699 "traddr": "10.0.0.2", 00:29:59.699 "trsvcid": "4420", 00:29:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:59.699 }, 00:29:59.699 "ctrlr_data": { 00:29:59.699 "cntlid": 2, 00:29:59.699 "vendor_id": "0x8086", 00:29:59.699 "model_number": "SPDK bdev Controller", 00:29:59.699 "serial_number": "00000000000000000000", 00:29:59.699 "firmware_revision": "25.01", 00:29:59.699 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.699 "oacs": { 00:29:59.699 "security": 0, 00:29:59.699 "format": 0, 00:29:59.699 "firmware": 0, 00:29:59.699 "ns_manage": 0 00:29:59.699 }, 00:29:59.699 "multi_ctrlr": true, 00:29:59.699 "ana_reporting": false 00:29:59.699 }, 00:29:59.699 "vs": { 00:29:59.699 "nvme_version": "1.3" 00:29:59.699 }, 00:29:59.699 "ns_data": { 00:29:59.699 "id": 1, 00:29:59.699 "can_share": true 00:29:59.699 } 00:29:59.699 } 00:29:59.699 ], 00:29:59.699 "mp_policy": "active_passive" 00:29:59.699 } 00:29:59.699 } 00:29:59.699 ] 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
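The reset step exercises the reconnect path: the admin connection is torn down (the transient "Bad file descriptor" flush error above is part of that), the host reconnects, and the target allocates a fresh controller, which is why cntlid moves from 1 to 2 in the second bdev_get_bdevs dump while the NGUID/UUID stays the same. A minimal sketch of that check, again via scripts/rpc.py:

  ./scripts/rpc.py bdev_nvme_reset_controller nvme0
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1     # same bdev, same NGUID/UUID, new cntlid
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0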
00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.L0iFOzPy9o 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.L0iFOzPy9o 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.L0iFOzPy9o 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.699 [2024-12-10 00:11:44.068144] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:59.699 [2024-12-10 00:11:44.068249] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.699 [2024-12-10 00:11:44.084197] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:59.699 nvme0n1 00:29:59.699 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.700 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:59.700 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.700 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.700 [ 00:29:59.700 { 00:29:59.700 "name": "nvme0n1", 00:29:59.700 "aliases": [ 00:29:59.700 "dc584789-3375-4089-bd51-42226e02d1f4" 00:29:59.700 ], 00:29:59.700 "product_name": "NVMe disk", 00:29:59.700 "block_size": 512, 00:29:59.700 "num_blocks": 2097152, 00:29:59.700 "uuid": "dc584789-3375-4089-bd51-42226e02d1f4", 00:29:59.700 "numa_id": 1, 00:29:59.700 "assigned_rate_limits": { 00:29:59.700 "rw_ios_per_sec": 0, 00:29:59.700 "rw_mbytes_per_sec": 0, 00:29:59.700 "r_mbytes_per_sec": 0, 00:29:59.700 "w_mbytes_per_sec": 0 00:29:59.700 }, 00:29:59.700 "claimed": false, 00:29:59.700 "zoned": false, 00:29:59.700 "supported_io_types": { 00:29:59.700 "read": true, 00:29:59.700 "write": true, 00:29:59.700 "unmap": false, 00:29:59.700 "flush": true, 00:29:59.700 "reset": true, 00:29:59.700 "nvme_admin": true, 00:29:59.700 "nvme_io": true, 00:29:59.700 "nvme_io_md": false, 00:29:59.700 "write_zeroes": true, 00:29:59.700 "zcopy": false, 00:29:59.700 "get_zone_info": false, 00:29:59.700 "zone_management": false, 00:29:59.700 "zone_append": false, 00:29:59.700 "compare": true, 00:29:59.700 "compare_and_write": true, 00:29:59.700 "abort": true, 00:29:59.700 "seek_hole": false, 00:29:59.700 "seek_data": false, 00:29:59.700 "copy": true, 00:29:59.700 "nvme_iov_md": false 00:29:59.700 }, 00:29:59.700 "memory_domains": [ 00:29:59.700 { 00:29:59.700 "dma_device_id": "system", 00:29:59.700 "dma_device_type": 1 00:29:59.700 } 00:29:59.700 ], 00:29:59.700 "driver_specific": { 00:29:59.700 "nvme": [ 00:29:59.700 { 00:29:59.700 "trid": { 00:29:59.700 "trtype": "TCP", 00:29:59.700 "adrfam": "IPv4", 00:29:59.700 "traddr": "10.0.0.2", 00:29:59.700 "trsvcid": "4421", 00:29:59.700 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:59.700 }, 00:29:59.700 "ctrlr_data": { 00:29:59.700 "cntlid": 3, 00:29:59.700 "vendor_id": "0x8086", 00:29:59.700 "model_number": "SPDK bdev Controller", 00:29:59.700 "serial_number": "00000000000000000000", 00:29:59.959 "firmware_revision": "25.01", 00:29:59.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.959 "oacs": { 00:29:59.959 "security": 0, 00:29:59.959 "format": 0, 00:29:59.959 "firmware": 0, 00:29:59.959 "ns_manage": 0 00:29:59.959 }, 00:29:59.959 "multi_ctrlr": true, 00:29:59.959 "ana_reporting": false 00:29:59.959 }, 00:29:59.959 "vs": { 00:29:59.959 "nvme_version": "1.3" 00:29:59.959 }, 00:29:59.959 "ns_data": { 00:29:59.959 "id": 1, 00:29:59.959 "can_share": true 00:29:59.959 } 00:29:59.959 } 00:29:59.959 ], 00:29:59.959 "mp_policy": "active_passive" 00:29:59.959 } 00:29:59.959 } 00:29:59.959 ] 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.L0iFOzPy9o 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
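The final leg of the test adds a TLS-protected listener on port 4421 (flagged as experimental in the notices above) and reconnects with a pre-shared key. The sketch below mirrors that sequence; the key string is the interchange-format test value from this run, the /tmp/psk.key path is a hypothetical stand-in for the mktemp file used by the script, and nothing here should be reused outside a test.

  # TLS PSK setup as traced above. Key material below is the test value from this run.
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/psk.key
  chmod 0600 /tmp/psk.key
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/psk.key
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  # Host side: connect to the secured listener with the same key and the allowed host NQN.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1     # cntlid 3, trsvcid 4421 in the dump above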
00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:59.959 rmmod nvme_tcp 00:29:59.959 rmmod nvme_fabrics 00:29:59.959 rmmod nvme_keyring 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 506393 ']' 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 506393 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 506393 ']' 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 506393 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 506393 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 506393' 00:29:59.959 killing process with pid 506393 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 506393 00:29:59.959 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 506393 00:30:00.219 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:00.219 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:00.219 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:00.219 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:00.219 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:00.219 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:00.219 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:00.219 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:00.219 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:00.219 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.219 
00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.219 00:11:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.123 00:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:02.123 00:30:02.123 real 0m11.529s 00:30:02.123 user 0m4.173s 00:30:02.123 sys 0m6.078s 00:30:02.123 00:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:02.123 00:11:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:02.123 ************************************ 00:30:02.123 END TEST nvmf_async_init 00:30:02.123 ************************************ 00:30:02.382 00:11:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.383 ************************************ 00:30:02.383 START TEST dma 00:30:02.383 ************************************ 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:02.383 * Looking for test storage... 00:30:02.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:02.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.383 --rc genhtml_branch_coverage=1 00:30:02.383 --rc genhtml_function_coverage=1 00:30:02.383 --rc genhtml_legend=1 00:30:02.383 --rc geninfo_all_blocks=1 00:30:02.383 --rc geninfo_unexecuted_blocks=1 00:30:02.383 00:30:02.383 ' 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:02.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.383 --rc genhtml_branch_coverage=1 00:30:02.383 --rc genhtml_function_coverage=1 00:30:02.383 --rc genhtml_legend=1 00:30:02.383 --rc geninfo_all_blocks=1 00:30:02.383 --rc geninfo_unexecuted_blocks=1 00:30:02.383 00:30:02.383 ' 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:02.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.383 --rc genhtml_branch_coverage=1 00:30:02.383 --rc genhtml_function_coverage=1 00:30:02.383 --rc genhtml_legend=1 00:30:02.383 --rc geninfo_all_blocks=1 00:30:02.383 --rc geninfo_unexecuted_blocks=1 00:30:02.383 00:30:02.383 ' 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:02.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.383 --rc genhtml_branch_coverage=1 00:30:02.383 --rc genhtml_function_coverage=1 00:30:02.383 --rc genhtml_legend=1 00:30:02.383 --rc geninfo_all_blocks=1 00:30:02.383 --rc geninfo_unexecuted_blocks=1 00:30:02.383 00:30:02.383 ' 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.383 
00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.383 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.643 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:02.643 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:02.643 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.643 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.643 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:02.643 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.643 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:02.643 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:02.643 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.643 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.643 00:11:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.643 00:11:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:02.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:02.644 00:30:02.644 real 0m0.230s 00:30:02.644 user 0m0.134s 00:30:02.644 sys 0m0.113s 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:02.644 ************************************ 00:30:02.644 END TEST dma 00:30:02.644 ************************************ 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.644 ************************************ 00:30:02.644 START TEST nvmf_identify 00:30:02.644 
************************************ 00:30:02.644 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:02.644 * Looking for test storage... 00:30:02.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:02.644 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:02.644 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:30:02.644 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:02.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.904 --rc genhtml_branch_coverage=1 00:30:02.904 --rc genhtml_function_coverage=1 00:30:02.904 --rc genhtml_legend=1 00:30:02.904 --rc geninfo_all_blocks=1 00:30:02.904 --rc geninfo_unexecuted_blocks=1 00:30:02.904 00:30:02.904 ' 00:30:02.904 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:02.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.904 --rc genhtml_branch_coverage=1 00:30:02.904 --rc genhtml_function_coverage=1 00:30:02.904 --rc genhtml_legend=1 00:30:02.905 --rc geninfo_all_blocks=1 00:30:02.905 --rc geninfo_unexecuted_blocks=1 00:30:02.905 00:30:02.905 ' 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:02.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.905 --rc genhtml_branch_coverage=1 00:30:02.905 --rc genhtml_function_coverage=1 00:30:02.905 --rc genhtml_legend=1 00:30:02.905 --rc geninfo_all_blocks=1 00:30:02.905 --rc geninfo_unexecuted_blocks=1 00:30:02.905 00:30:02.905 ' 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:02.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.905 --rc genhtml_branch_coverage=1 00:30:02.905 --rc genhtml_function_coverage=1 00:30:02.905 --rc genhtml_legend=1 00:30:02.905 --rc geninfo_all_blocks=1 00:30:02.905 --rc geninfo_unexecuted_blocks=1 00:30:02.905 00:30:02.905 ' 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:02.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:02.905 00:11:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:11.034 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:11.035 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:11.035 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
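The NIC discovery above works purely from PCI IDs and sysfs: the helper builds per-family ID lists (the E810 entries 0x1592/0x159b among them), matches each PCI function, then resolves its kernel netdev through /sys/bus/pci/devices/<bdf>/net. A stand-alone sketch of the same lookup for the two ports found on this rig (the cvl_0_* names are what this machine calls the interfaces, not something sysfs provides):

    for pci in 0000:af:00.0 0000:af:00.1; do
        # Device ID (0x159b for the E810 ports found above) and the bound driver (ice)
        dev_id=$(cat /sys/bus/pci/devices/$pci/device)
        drv=$(basename "$(readlink /sys/bus/pci/devices/$pci/driver)")
        # Kernel net interface(s) backing this PCI function
        for net in /sys/bus/pci/devices/$pci/net/*; do
            echo "$pci ($dev_id, $drv) -> ${net##*/}"
        done
    done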
00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:11.035 Found net devices under 0000:af:00.0: cvl_0_0 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:11.035 Found net devices under 0000:af:00.1: cvl_0_1 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:11.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:30:11.035 00:30:11.035 --- 10.0.0.2 ping statistics --- 00:30:11.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.035 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:11.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:30:11.035 00:30:11.035 --- 10.0.0.1 ping statistics --- 00:30:11.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.035 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=510630 00:30:11.035 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:11.036 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:11.036 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 510630 00:30:11.036 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 510630 ']' 00:30:11.036 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.036 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.036 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.036 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.036 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:11.036 [2024-12-10 00:11:54.580676] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
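The back-to-back topology that the two pings just validated is built entirely with ip/iptables, condensed here from the nvmf_tcp_init trace above (interface names, the 10.0.0.0/24 addressing and port 4420 are this rig's choices; the target then runs inside the new namespace as shown):

    # One E810 port moves into a dedicated namespace for the target; the peer port
    # stays in the root namespace as the initiator.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # All target-side work happens inside that namespace, starting with the app itself
    # (run from the spdk checkout, as the full path in the trace shows):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF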
00:30:11.036 [2024-12-10 00:11:54.580731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.036 [2024-12-10 00:11:54.679252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:11.036 [2024-12-10 00:11:54.721755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.036 [2024-12-10 00:11:54.721791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.036 [2024-12-10 00:11:54.721801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.036 [2024-12-10 00:11:54.721809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.036 [2024-12-10 00:11:54.721816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:11.036 [2024-12-10 00:11:54.723402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.036 [2024-12-10 00:11:54.723515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.036 [2024-12-10 00:11:54.723621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.036 [2024-12-10 00:11:54.723622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:11.036 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:11.036 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:11.036 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:11.036 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.036 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:11.036 [2024-12-10 00:11:55.426363] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.036 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.036 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:11.036 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:11.036 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:11.036 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:11.036 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.036 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:11.298 Malloc0 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:11.298 [2024-12-10 00:11:55.532899] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:11.298 [ 00:30:11.298 { 00:30:11.298 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:11.298 "subtype": "Discovery", 00:30:11.298 "listen_addresses": [ 00:30:11.298 { 00:30:11.298 "trtype": "TCP", 00:30:11.298 "adrfam": "IPv4", 00:30:11.298 "traddr": "10.0.0.2", 00:30:11.298 "trsvcid": "4420" 00:30:11.298 } 00:30:11.298 ], 00:30:11.298 "allow_any_host": true, 00:30:11.298 "hosts": [] 00:30:11.298 }, 00:30:11.298 { 00:30:11.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:11.298 "subtype": "NVMe", 00:30:11.298 "listen_addresses": [ 00:30:11.298 { 00:30:11.298 "trtype": "TCP", 00:30:11.298 "adrfam": "IPv4", 00:30:11.298 "traddr": "10.0.0.2", 00:30:11.298 "trsvcid": "4420" 00:30:11.298 } 00:30:11.298 ], 00:30:11.298 "allow_any_host": true, 00:30:11.298 "hosts": [], 00:30:11.298 "serial_number": "SPDK00000000000001", 00:30:11.298 "model_number": "SPDK bdev Controller", 00:30:11.298 "max_namespaces": 32, 00:30:11.298 "min_cntlid": 1, 00:30:11.298 "max_cntlid": 65519, 00:30:11.298 "namespaces": [ 00:30:11.298 { 00:30:11.298 "nsid": 1, 00:30:11.298 "bdev_name": "Malloc0", 00:30:11.298 "name": "Malloc0", 00:30:11.298 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:11.298 "eui64": "ABCDEF0123456789", 00:30:11.298 "uuid": "b28da48b-1b99-4f71-a44c-cb773438733e" 00:30:11.298 } 00:30:11.298 ] 00:30:11.298 } 00:30:11.298 ] 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.298 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:11.298 [2024-12-10 00:11:55.589427] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:30:11.298 [2024-12-10 00:11:55.589461] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510703 ] 00:30:11.298 [2024-12-10 00:11:55.629166] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:11.298 [2024-12-10 00:11:55.629216] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:11.298 [2024-12-10 00:11:55.629223] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:11.298 [2024-12-10 00:11:55.629235] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:11.298 [2024-12-10 00:11:55.629247] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:11.298 [2024-12-10 00:11:55.633154] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:11.298 [2024-12-10 00:11:55.633194] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa57690 0 00:30:11.298 [2024-12-10 00:11:55.633302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:11.298 [2024-12-10 00:11:55.633312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:11.298 [2024-12-10 00:11:55.633322] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:11.298 [2024-12-10 00:11:55.633327] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:11.298 [2024-12-10 00:11:55.633360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.298 [2024-12-10 00:11:55.633367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.298 [2024-12-10 00:11:55.633373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa57690) 00:30:11.298 [2024-12-10 00:11:55.633386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:11.298 [2024-12-10 00:11:55.633402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9100, cid 0, qid 0 00:30:11.298 [2024-12-10 00:11:55.639838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.298 [2024-12-10 00:11:55.639848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.298 [2024-12-10 00:11:55.639853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.298 [2024-12-10 00:11:55.639858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9100) on tqpair=0xa57690 00:30:11.298 [2024-12-10 00:11:55.639870] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:11.298 [2024-12-10 00:11:55.639878] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:11.298 [2024-12-10 00:11:55.639884] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:11.298 [2024-12-10 00:11:55.639902] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.298 [2024-12-10 00:11:55.639907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.298 [2024-12-10 00:11:55.639911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa57690) 00:30:11.298 [2024-12-10 00:11:55.639919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.298 [2024-12-10 00:11:55.639933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9100, cid 0, qid 0 00:30:11.298 [2024-12-10 00:11:55.640094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.298 [2024-12-10 00:11:55.640101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.298 [2024-12-10 00:11:55.640105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.298 [2024-12-10 00:11:55.640110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9100) on tqpair=0xa57690 00:30:11.298 [2024-12-10 00:11:55.640119] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:11.298 [2024-12-10 00:11:55.640128] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:11.298 [2024-12-10 00:11:55.640135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.298 [2024-12-10 00:11:55.640140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.298 [2024-12-10 00:11:55.640144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa57690) 00:30:11.298 [2024-12-10 00:11:55.640151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.298 [2024-12-10 00:11:55.640163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9100, cid 0, qid 0 00:30:11.298 [2024-12-10 00:11:55.640230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.298 [2024-12-10 00:11:55.640237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.298 [2024-12-10 00:11:55.640242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.298 [2024-12-10 00:11:55.640249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9100) on tqpair=0xa57690 00:30:11.298 [2024-12-10 00:11:55.640255] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:11.298 [2024-12-10 00:11:55.640264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:11.298 [2024-12-10 00:11:55.640272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.298 [2024-12-10 00:11:55.640277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.298 [2024-12-10 00:11:55.640281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa57690) 00:30:11.298 [2024-12-10 00:11:55.640288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.298 [2024-12-10 00:11:55.640299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9100, cid 0, qid 0 
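For orientation amid the qpair debug output: the target-side provisioning that identify.sh performed before launching the identify tool reduces to the RPC calls below, a sketch via scripts/rpc.py (which the test's rpc_cmd wraps); the NQNs, NGUID/EUI64 values and addresses mirror the trace rather than anything mandatory.

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems

    # Host side: identify the discovery controller over TCP, with all debug tracing on
    ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The FABRIC CONNECT and PROPERTY GET/SET commands in the surrounding debug lines are the fabrics equivalents of the register accesses (VS, CAP, CC, CSTS) a PCIe host would perform while enabling a controller, which is why the trace walks read vs, read cap, check en and the CC.EN/CSTS.RDY handshake before it issues IDENTIFY.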
00:30:11.298 [2024-12-10 00:11:55.640363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.298 [2024-12-10 00:11:55.640370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.298 [2024-12-10 00:11:55.640374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.298 [2024-12-10 00:11:55.640379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9100) on tqpair=0xa57690 00:30:11.298 [2024-12-10 00:11:55.640385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:11.298 [2024-12-10 00:11:55.640395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.298 [2024-12-10 00:11:55.640400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.640404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa57690) 00:30:11.299 [2024-12-10 00:11:55.640411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.299 [2024-12-10 00:11:55.640422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9100, cid 0, qid 0 00:30:11.299 [2024-12-10 00:11:55.640483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.299 [2024-12-10 00:11:55.640490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.299 [2024-12-10 00:11:55.640494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.640499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9100) on tqpair=0xa57690 00:30:11.299 [2024-12-10 00:11:55.640504] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:11.299 [2024-12-10 00:11:55.640511] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:11.299 [2024-12-10 00:11:55.640519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:11.299 [2024-12-10 00:11:55.640629] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:11.299 [2024-12-10 00:11:55.640636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:11.299 [2024-12-10 00:11:55.640646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.640651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.640656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa57690) 00:30:11.299 [2024-12-10 00:11:55.640662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.299 [2024-12-10 00:11:55.640674] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9100, cid 0, qid 0 00:30:11.299 [2024-12-10 00:11:55.640737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.299 [2024-12-10 00:11:55.640744] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.299 [2024-12-10 00:11:55.640749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.640753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9100) on tqpair=0xa57690 00:30:11.299 [2024-12-10 00:11:55.640759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:11.299 [2024-12-10 00:11:55.640768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.640773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.640778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa57690) 00:30:11.299 [2024-12-10 00:11:55.640784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.299 [2024-12-10 00:11:55.640796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9100, cid 0, qid 0 00:30:11.299 [2024-12-10 00:11:55.640866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.299 [2024-12-10 00:11:55.640873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.299 [2024-12-10 00:11:55.640878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.640882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9100) on tqpair=0xa57690 00:30:11.299 [2024-12-10 00:11:55.640888] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:11.299 [2024-12-10 00:11:55.640894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:11.299 [2024-12-10 00:11:55.640903] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:11.299 [2024-12-10 00:11:55.640914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:11.299 [2024-12-10 00:11:55.640925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.640929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa57690) 00:30:11.299 [2024-12-10 00:11:55.640936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.299 [2024-12-10 00:11:55.640948] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9100, cid 0, qid 0 00:30:11.299 [2024-12-10 00:11:55.641039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:11.299 [2024-12-10 00:11:55.641047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:11.299 [2024-12-10 00:11:55.641051] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641056] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa57690): datao=0, datal=4096, cccid=0 00:30:11.299 [2024-12-10 00:11:55.641062] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xab9100) on tqpair(0xa57690): expected_datao=0, payload_size=4096 00:30:11.299 [2024-12-10 00:11:55.641068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641076] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641081] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.299 [2024-12-10 00:11:55.641097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.299 [2024-12-10 00:11:55.641102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9100) on tqpair=0xa57690 00:30:11.299 [2024-12-10 00:11:55.641120] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:11.299 [2024-12-10 00:11:55.641127] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:11.299 [2024-12-10 00:11:55.641133] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:11.299 [2024-12-10 00:11:55.641140] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:11.299 [2024-12-10 00:11:55.641146] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:11.299 [2024-12-10 00:11:55.641152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:11.299 [2024-12-10 00:11:55.641162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:11.299 [2024-12-10 00:11:55.641169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa57690) 00:30:11.299 [2024-12-10 00:11:55.641186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:11.299 [2024-12-10 00:11:55.641198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9100, cid 0, qid 0 00:30:11.299 [2024-12-10 00:11:55.641266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.299 [2024-12-10 00:11:55.641273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.299 [2024-12-10 00:11:55.641277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9100) on tqpair=0xa57690 00:30:11.299 [2024-12-10 00:11:55.641290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa57690) 00:30:11.299 [2024-12-10 
00:11:55.641306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.299 [2024-12-10 00:11:55.641313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa57690) 00:30:11.299 [2024-12-10 00:11:55.641328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.299 [2024-12-10 00:11:55.641335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa57690) 00:30:11.299 [2024-12-10 00:11:55.641350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.299 [2024-12-10 00:11:55.641357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.299 [2024-12-10 00:11:55.641372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.299 [2024-12-10 00:11:55.641378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:11.299 [2024-12-10 00:11:55.641392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:11.299 [2024-12-10 00:11:55.641400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.299 [2024-12-10 00:11:55.641405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa57690) 00:30:11.299 [2024-12-10 00:11:55.641411] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.299 [2024-12-10 00:11:55.641424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9100, cid 0, qid 0 00:30:11.299 [2024-12-10 00:11:55.641430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9280, cid 1, qid 0 00:30:11.299 [2024-12-10 00:11:55.641435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9400, cid 2, qid 0 00:30:11.299 [2024-12-10 00:11:55.641440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.299 [2024-12-10 00:11:55.641446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9700, cid 4, qid 0 00:30:11.299 [2024-12-10 00:11:55.641541] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.299 [2024-12-10 00:11:55.641548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.299 [2024-12-10 00:11:55.641553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.299 
[2024-12-10 00:11:55.641558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9700) on tqpair=0xa57690 00:30:11.299 [2024-12-10 00:11:55.641563] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:11.299 [2024-12-10 00:11:55.641570] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:11.299 [2024-12-10 00:11:55.641580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.641585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa57690) 00:30:11.300 [2024-12-10 00:11:55.641592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.300 [2024-12-10 00:11:55.641604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9700, cid 4, qid 0 00:30:11.300 [2024-12-10 00:11:55.641679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:11.300 [2024-12-10 00:11:55.641686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:11.300 [2024-12-10 00:11:55.641691] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.641696] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa57690): datao=0, datal=4096, cccid=4 00:30:11.300 [2024-12-10 00:11:55.641701] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xab9700) on tqpair(0xa57690): expected_datao=0, payload_size=4096 00:30:11.300 [2024-12-10 00:11:55.641707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.641714] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.641718] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.681976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.300 [2024-12-10 00:11:55.681989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.300 [2024-12-10 00:11:55.681993] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.681998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9700) on tqpair=0xa57690 00:30:11.300 [2024-12-10 00:11:55.682014] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:11.300 [2024-12-10 00:11:55.682039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.682044] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa57690) 00:30:11.300 [2024-12-10 00:11:55.682055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.300 [2024-12-10 00:11:55.682063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.682068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.682072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa57690) 00:30:11.300 [2024-12-10 00:11:55.682079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.300 [2024-12-10 00:11:55.682097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9700, cid 4, qid 0 00:30:11.300 [2024-12-10 00:11:55.682103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9880, cid 5, qid 0 00:30:11.300 [2024-12-10 00:11:55.682204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:11.300 [2024-12-10 00:11:55.682211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:11.300 [2024-12-10 00:11:55.682215] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.682220] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa57690): datao=0, datal=1024, cccid=4 00:30:11.300 [2024-12-10 00:11:55.682226] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xab9700) on tqpair(0xa57690): expected_datao=0, payload_size=1024 00:30:11.300 [2024-12-10 00:11:55.682231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.682238] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.682243] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.682249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.300 [2024-12-10 00:11:55.682255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.300 [2024-12-10 00:11:55.682259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.682264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9880) on tqpair=0xa57690 00:30:11.300 [2024-12-10 00:11:55.723966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.300 [2024-12-10 00:11:55.723978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.300 [2024-12-10 00:11:55.723983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.723987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9700) on tqpair=0xa57690 00:30:11.300 [2024-12-10 00:11:55.723999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.724004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa57690) 00:30:11.300 [2024-12-10 00:11:55.724012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.300 [2024-12-10 00:11:55.724030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9700, cid 4, qid 0 00:30:11.300 [2024-12-10 00:11:55.724123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:11.300 [2024-12-10 00:11:55.724129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:11.300 [2024-12-10 00:11:55.724134] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.724138] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa57690): datao=0, datal=3072, cccid=4 00:30:11.300 [2024-12-10 00:11:55.724144] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xab9700) on tqpair(0xa57690): expected_datao=0, payload_size=3072 00:30:11.300 [2024-12-10 00:11:55.724150] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:30:11.300 [2024-12-10 00:11:55.724162] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.724167] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.724190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.300 [2024-12-10 00:11:55.724200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.300 [2024-12-10 00:11:55.724204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.724209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9700) on tqpair=0xa57690 00:30:11.300 [2024-12-10 00:11:55.724218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.724223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa57690) 00:30:11.300 [2024-12-10 00:11:55.724230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.300 [2024-12-10 00:11:55.724246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9700, cid 4, qid 0 00:30:11.300 [2024-12-10 00:11:55.724316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:11.300 [2024-12-10 00:11:55.724323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:11.300 [2024-12-10 00:11:55.724328] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.724332] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa57690): datao=0, datal=8, cccid=4 00:30:11.300 [2024-12-10 00:11:55.724338] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xab9700) on tqpair(0xa57690): expected_datao=0, payload_size=8 00:30:11.300 [2024-12-10 00:11:55.724343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.724350] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.724354] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.764974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.300 [2024-12-10 00:11:55.764985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.300 [2024-12-10 00:11:55.764989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.300 [2024-12-10 00:11:55.764994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9700) on tqpair=0xa57690 00:30:11.300 ===================================================== 00:30:11.300 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:11.300 ===================================================== 00:30:11.300 Controller Capabilities/Features 00:30:11.300 ================================ 00:30:11.300 Vendor ID: 0000 00:30:11.300 Subsystem Vendor ID: 0000 00:30:11.300 Serial Number: .................... 00:30:11.300 Model Number: ........................................ 
00:30:11.300 Firmware Version: 25.01 00:30:11.300 Recommended Arb Burst: 0 00:30:11.300 IEEE OUI Identifier: 00 00 00 00:30:11.300 Multi-path I/O 00:30:11.300 May have multiple subsystem ports: No 00:30:11.300 May have multiple controllers: No 00:30:11.300 Associated with SR-IOV VF: No 00:30:11.300 Max Data Transfer Size: 131072 00:30:11.300 Max Number of Namespaces: 0 00:30:11.300 Max Number of I/O Queues: 1024 00:30:11.300 NVMe Specification Version (VS): 1.3 00:30:11.300 NVMe Specification Version (Identify): 1.3 00:30:11.300 Maximum Queue Entries: 128 00:30:11.300 Contiguous Queues Required: Yes 00:30:11.300 Arbitration Mechanisms Supported 00:30:11.300 Weighted Round Robin: Not Supported 00:30:11.300 Vendor Specific: Not Supported 00:30:11.300 Reset Timeout: 15000 ms 00:30:11.300 Doorbell Stride: 4 bytes 00:30:11.300 NVM Subsystem Reset: Not Supported 00:30:11.300 Command Sets Supported 00:30:11.300 NVM Command Set: Supported 00:30:11.300 Boot Partition: Not Supported 00:30:11.300 Memory Page Size Minimum: 4096 bytes 00:30:11.300 Memory Page Size Maximum: 4096 bytes 00:30:11.300 Persistent Memory Region: Not Supported 00:30:11.300 Optional Asynchronous Events Supported 00:30:11.300 Namespace Attribute Notices: Not Supported 00:30:11.300 Firmware Activation Notices: Not Supported 00:30:11.300 ANA Change Notices: Not Supported 00:30:11.300 PLE Aggregate Log Change Notices: Not Supported 00:30:11.300 LBA Status Info Alert Notices: Not Supported 00:30:11.300 EGE Aggregate Log Change Notices: Not Supported 00:30:11.300 Normal NVM Subsystem Shutdown event: Not Supported 00:30:11.300 Zone Descriptor Change Notices: Not Supported 00:30:11.300 Discovery Log Change Notices: Supported 00:30:11.300 Controller Attributes 00:30:11.300 128-bit Host Identifier: Not Supported 00:30:11.300 Non-Operational Permissive Mode: Not Supported 00:30:11.300 NVM Sets: Not Supported 00:30:11.300 Read Recovery Levels: Not Supported 00:30:11.300 Endurance Groups: Not Supported 00:30:11.300 Predictable Latency Mode: Not Supported 00:30:11.300 Traffic Based Keep ALive: Not Supported 00:30:11.300 Namespace Granularity: Not Supported 00:30:11.300 SQ Associations: Not Supported 00:30:11.300 UUID List: Not Supported 00:30:11.300 Multi-Domain Subsystem: Not Supported 00:30:11.300 Fixed Capacity Management: Not Supported 00:30:11.300 Variable Capacity Management: Not Supported 00:30:11.300 Delete Endurance Group: Not Supported 00:30:11.301 Delete NVM Set: Not Supported 00:30:11.301 Extended LBA Formats Supported: Not Supported 00:30:11.301 Flexible Data Placement Supported: Not Supported 00:30:11.301 00:30:11.301 Controller Memory Buffer Support 00:30:11.301 ================================ 00:30:11.301 Supported: No 00:30:11.301 00:30:11.301 Persistent Memory Region Support 00:30:11.301 ================================ 00:30:11.301 Supported: No 00:30:11.301 00:30:11.301 Admin Command Set Attributes 00:30:11.301 ============================ 00:30:11.301 Security Send/Receive: Not Supported 00:30:11.301 Format NVM: Not Supported 00:30:11.301 Firmware Activate/Download: Not Supported 00:30:11.301 Namespace Management: Not Supported 00:30:11.301 Device Self-Test: Not Supported 00:30:11.301 Directives: Not Supported 00:30:11.301 NVMe-MI: Not Supported 00:30:11.301 Virtualization Management: Not Supported 00:30:11.301 Doorbell Buffer Config: Not Supported 00:30:11.301 Get LBA Status Capability: Not Supported 00:30:11.301 Command & Feature Lockdown Capability: Not Supported 00:30:11.301 Abort Command Limit: 1 00:30:11.301 Async 
Event Request Limit: 4 00:30:11.301 Number of Firmware Slots: N/A 00:30:11.301 Firmware Slot 1 Read-Only: N/A 00:30:11.301 Firmware Activation Without Reset: N/A 00:30:11.301 Multiple Update Detection Support: N/A 00:30:11.301 Firmware Update Granularity: No Information Provided 00:30:11.301 Per-Namespace SMART Log: No 00:30:11.301 Asymmetric Namespace Access Log Page: Not Supported 00:30:11.301 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:11.301 Command Effects Log Page: Not Supported 00:30:11.301 Get Log Page Extended Data: Supported 00:30:11.301 Telemetry Log Pages: Not Supported 00:30:11.301 Persistent Event Log Pages: Not Supported 00:30:11.301 Supported Log Pages Log Page: May Support 00:30:11.301 Commands Supported & Effects Log Page: Not Supported 00:30:11.301 Feature Identifiers & Effects Log Page:May Support 00:30:11.301 NVMe-MI Commands & Effects Log Page: May Support 00:30:11.301 Data Area 4 for Telemetry Log: Not Supported 00:30:11.301 Error Log Page Entries Supported: 128 00:30:11.301 Keep Alive: Not Supported 00:30:11.301 00:30:11.301 NVM Command Set Attributes 00:30:11.301 ========================== 00:30:11.301 Submission Queue Entry Size 00:30:11.301 Max: 1 00:30:11.301 Min: 1 00:30:11.301 Completion Queue Entry Size 00:30:11.301 Max: 1 00:30:11.301 Min: 1 00:30:11.301 Number of Namespaces: 0 00:30:11.301 Compare Command: Not Supported 00:30:11.301 Write Uncorrectable Command: Not Supported 00:30:11.301 Dataset Management Command: Not Supported 00:30:11.301 Write Zeroes Command: Not Supported 00:30:11.301 Set Features Save Field: Not Supported 00:30:11.301 Reservations: Not Supported 00:30:11.301 Timestamp: Not Supported 00:30:11.301 Copy: Not Supported 00:30:11.301 Volatile Write Cache: Not Present 00:30:11.301 Atomic Write Unit (Normal): 1 00:30:11.301 Atomic Write Unit (PFail): 1 00:30:11.301 Atomic Compare & Write Unit: 1 00:30:11.301 Fused Compare & Write: Supported 00:30:11.301 Scatter-Gather List 00:30:11.301 SGL Command Set: Supported 00:30:11.301 SGL Keyed: Supported 00:30:11.301 SGL Bit Bucket Descriptor: Not Supported 00:30:11.301 SGL Metadata Pointer: Not Supported 00:30:11.301 Oversized SGL: Not Supported 00:30:11.301 SGL Metadata Address: Not Supported 00:30:11.301 SGL Offset: Supported 00:30:11.301 Transport SGL Data Block: Not Supported 00:30:11.301 Replay Protected Memory Block: Not Supported 00:30:11.301 00:30:11.301 Firmware Slot Information 00:30:11.301 ========================= 00:30:11.301 Active slot: 0 00:30:11.301 00:30:11.301 00:30:11.301 Error Log 00:30:11.301 ========= 00:30:11.301 00:30:11.301 Active Namespaces 00:30:11.301 ================= 00:30:11.301 Discovery Log Page 00:30:11.301 ================== 00:30:11.301 Generation Counter: 2 00:30:11.301 Number of Records: 2 00:30:11.301 Record Format: 0 00:30:11.301 00:30:11.301 Discovery Log Entry 0 00:30:11.301 ---------------------- 00:30:11.301 Transport Type: 3 (TCP) 00:30:11.301 Address Family: 1 (IPv4) 00:30:11.301 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:11.301 Entry Flags: 00:30:11.301 Duplicate Returned Information: 1 00:30:11.301 Explicit Persistent Connection Support for Discovery: 1 00:30:11.301 Transport Requirements: 00:30:11.301 Secure Channel: Not Required 00:30:11.301 Port ID: 0 (0x0000) 00:30:11.301 Controller ID: 65535 (0xffff) 00:30:11.301 Admin Max SQ Size: 128 00:30:11.301 Transport Service Identifier: 4420 00:30:11.301 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:11.301 Transport Address: 10.0.0.2 00:30:11.301 
Discovery Log Entry 1 00:30:11.301 ---------------------- 00:30:11.301 Transport Type: 3 (TCP) 00:30:11.301 Address Family: 1 (IPv4) 00:30:11.301 Subsystem Type: 2 (NVM Subsystem) 00:30:11.301 Entry Flags: 00:30:11.301 Duplicate Returned Information: 0 00:30:11.301 Explicit Persistent Connection Support for Discovery: 0 00:30:11.301 Transport Requirements: 00:30:11.301 Secure Channel: Not Required 00:30:11.301 Port ID: 0 (0x0000) 00:30:11.301 Controller ID: 65535 (0xffff) 00:30:11.301 Admin Max SQ Size: 128 00:30:11.301 Transport Service Identifier: 4420 00:30:11.301 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:11.301 Transport Address: 10.0.0.2 [2024-12-10 00:11:55.765081] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:11.301 [2024-12-10 00:11:55.765094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9100) on tqpair=0xa57690 00:30:11.301 [2024-12-10 00:11:55.765101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.301 [2024-12-10 00:11:55.765108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9280) on tqpair=0xa57690 00:30:11.301 [2024-12-10 00:11:55.765113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.301 [2024-12-10 00:11:55.765119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9400) on tqpair=0xa57690 00:30:11.301 [2024-12-10 00:11:55.765125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.301 [2024-12-10 00:11:55.765131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.301 [2024-12-10 00:11:55.765136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.301 [2024-12-10 00:11:55.765147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.301 [2024-12-10 00:11:55.765152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.301 [2024-12-10 00:11:55.765157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.301 [2024-12-10 00:11:55.765165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.301 [2024-12-10 00:11:55.765181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.301 [2024-12-10 00:11:55.765270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.301 [2024-12-10 00:11:55.765278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.301 [2024-12-10 00:11:55.765283] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.301 [2024-12-10 00:11:55.765288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.301 [2024-12-10 00:11:55.765295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.301 [2024-12-10 00:11:55.765300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.301 [2024-12-10 00:11:55.765304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.301 [2024-12-10 00:11:55.765311] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.301 [2024-12-10 00:11:55.765327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.301 [2024-12-10 00:11:55.765427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.301 [2024-12-10 00:11:55.765433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.301 [2024-12-10 00:11:55.765438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.301 [2024-12-10 00:11:55.765443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.301 [2024-12-10 00:11:55.765448] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:11.301 [2024-12-10 00:11:55.765454] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:11.301 [2024-12-10 00:11:55.765465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.301 [2024-12-10 00:11:55.765470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.301 [2024-12-10 00:11:55.765474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.301 [2024-12-10 00:11:55.765481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.301 [2024-12-10 00:11:55.765492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.301 [2024-12-10 00:11:55.765562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.301 [2024-12-10 00:11:55.765568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.301 [2024-12-10 00:11:55.765573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.301 [2024-12-10 00:11:55.765577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.301 [2024-12-10 00:11:55.765588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.301 [2024-12-10 00:11:55.765593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.765597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.302 [2024-12-10 00:11:55.765604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.302 [2024-12-10 00:11:55.765615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.302 [2024-12-10 00:11:55.765677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.302 [2024-12-10 00:11:55.765684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.302 [2024-12-10 00:11:55.765688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.765693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.302 [2024-12-10 00:11:55.765703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.765708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.765712] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.302 [2024-12-10 00:11:55.765719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.302 [2024-12-10 00:11:55.765731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.302 [2024-12-10 00:11:55.765799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.302 [2024-12-10 00:11:55.765806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.302 [2024-12-10 00:11:55.765810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.765815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.302 [2024-12-10 00:11:55.765829] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.765834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.765839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.302 [2024-12-10 00:11:55.765845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.302 [2024-12-10 00:11:55.765857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.302 [2024-12-10 00:11:55.765919] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.302 [2024-12-10 00:11:55.765926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.302 [2024-12-10 00:11:55.765930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.765935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.302 [2024-12-10 00:11:55.765944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.765949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.765953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.302 [2024-12-10 00:11:55.765960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.302 [2024-12-10 00:11:55.765971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.302 [2024-12-10 00:11:55.766030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.302 [2024-12-10 00:11:55.766037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.302 [2024-12-10 00:11:55.766042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.302 [2024-12-10 00:11:55.766056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.302 [2024-12-10 00:11:55.766072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.302 [2024-12-10 00:11:55.766084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.302 [2024-12-10 00:11:55.766148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.302 [2024-12-10 00:11:55.766155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.302 [2024-12-10 00:11:55.766159] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.302 [2024-12-10 00:11:55.766173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.302 [2024-12-10 00:11:55.766190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.302 [2024-12-10 00:11:55.766201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.302 [2024-12-10 00:11:55.766264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.302 [2024-12-10 00:11:55.766271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.302 [2024-12-10 00:11:55.766276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.302 [2024-12-10 00:11:55.766290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.302 [2024-12-10 00:11:55.766306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.302 [2024-12-10 00:11:55.766317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.302 [2024-12-10 00:11:55.766376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.302 [2024-12-10 00:11:55.766383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.302 [2024-12-10 00:11:55.766388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.302 [2024-12-10 00:11:55.766402] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.302 [2024-12-10 00:11:55.766418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.302 [2024-12-10 00:11:55.766429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.302 [2024-12-10 00:11:55.766495] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.302 [2024-12-10 00:11:55.766502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.302 [2024-12-10 00:11:55.766506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766511] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.302 [2024-12-10 00:11:55.766521] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.302 [2024-12-10 00:11:55.766537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.302 [2024-12-10 00:11:55.766548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.302 [2024-12-10 00:11:55.766613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.302 [2024-12-10 00:11:55.766620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.302 [2024-12-10 00:11:55.766624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.302 [2024-12-10 00:11:55.766638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.302 [2024-12-10 00:11:55.766648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.302 [2024-12-10 00:11:55.766654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.302 [2024-12-10 00:11:55.766665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.302 [2024-12-10 00:11:55.766727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.302 [2024-12-10 00:11:55.766733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.303 [2024-12-10 00:11:55.766739] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.766744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.303 [2024-12-10 00:11:55.766754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.766759] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.766763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.303 [2024-12-10 00:11:55.766770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.303 [2024-12-10 00:11:55.766781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.303 [2024-12-10 00:11:55.766846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.303 [2024-12-10 00:11:55.766853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.303 [2024-12-10 00:11:55.766858] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.766862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.303 [2024-12-10 00:11:55.766872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.766877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.766881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.303 [2024-12-10 00:11:55.766888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.303 [2024-12-10 00:11:55.766900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.303 [2024-12-10 00:11:55.766961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.303 [2024-12-10 00:11:55.766967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.303 [2024-12-10 00:11:55.766972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.766976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.303 [2024-12-10 00:11:55.766986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.766991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.766995] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.303 [2024-12-10 00:11:55.767002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.303 [2024-12-10 00:11:55.767013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.303 [2024-12-10 00:11:55.767074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.303 [2024-12-10 00:11:55.767080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.303 [2024-12-10 00:11:55.767085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.767089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.303 [2024-12-10 00:11:55.767099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.767104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.767109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.303 [2024-12-10 00:11:55.767115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.303 [2024-12-10 00:11:55.767126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.303 [2024-12-10 00:11:55.767185] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.303 [2024-12-10 00:11:55.767192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.303 [2024-12-10 00:11:55.767196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.767203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.303 
[2024-12-10 00:11:55.767212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.767217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.303 [2024-12-10 00:11:55.767222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.303 [2024-12-10 00:11:55.767229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.303 [2024-12-10 00:11:55.767240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.565 [2024-12-10 00:11:55.770830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.565 [2024-12-10 00:11:55.770840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.565 [2024-12-10 00:11:55.770844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.770849] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.565 [2024-12-10 00:11:55.770860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.770865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.770869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa57690) 00:30:11.565 [2024-12-10 00:11:55.770876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.565 [2024-12-10 00:11:55.770889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xab9580, cid 3, qid 0 00:30:11.565 [2024-12-10 00:11:55.771033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.565 [2024-12-10 00:11:55.771040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.565 [2024-12-10 00:11:55.771044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.771049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xab9580) on tqpair=0xa57690 00:30:11.565 [2024-12-10 00:11:55.771057] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:30:11.565 00:30:11.565 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:11.565 [2024-12-10 00:11:55.811291] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:30:11.565 [2024-12-10 00:11:55.811339] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510760 ] 00:30:11.565 [2024-12-10 00:11:55.852839] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:11.565 [2024-12-10 00:11:55.852881] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:11.565 [2024-12-10 00:11:55.852887] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:11.565 [2024-12-10 00:11:55.852899] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:11.565 [2024-12-10 00:11:55.852909] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:11.565 [2024-12-10 00:11:55.857027] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:11.565 [2024-12-10 00:11:55.857056] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x647690 0 00:30:11.565 [2024-12-10 00:11:55.864840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:11.565 [2024-12-10 00:11:55.864854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:11.565 [2024-12-10 00:11:55.864861] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:11.565 [2024-12-10 00:11:55.864866] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:11.565 [2024-12-10 00:11:55.864896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.864902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.864907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x647690) 00:30:11.565 [2024-12-10 00:11:55.864917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:11.565 [2024-12-10 00:11:55.864935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9100, cid 0, qid 0 00:30:11.565 [2024-12-10 00:11:55.871833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.565 [2024-12-10 00:11:55.871843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.565 [2024-12-10 00:11:55.871848] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.871853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9100) on tqpair=0x647690 00:30:11.565 [2024-12-10 00:11:55.871865] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:11.565 [2024-12-10 00:11:55.871872] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:11.565 [2024-12-10 00:11:55.871878] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:11.565 [2024-12-10 00:11:55.871893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.871898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.871902] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x647690) 00:30:11.565 [2024-12-10 00:11:55.871910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.565 [2024-12-10 00:11:55.871924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9100, cid 0, qid 0 00:30:11.565 [2024-12-10 00:11:55.872005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.565 [2024-12-10 00:11:55.872012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.565 [2024-12-10 00:11:55.872017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.872021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9100) on tqpair=0x647690 00:30:11.565 [2024-12-10 00:11:55.872029] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:11.565 [2024-12-10 00:11:55.872038] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:11.565 [2024-12-10 00:11:55.872046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.872050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.872054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x647690) 00:30:11.565 [2024-12-10 00:11:55.872061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.565 [2024-12-10 00:11:55.872074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9100, cid 0, qid 0 00:30:11.565 [2024-12-10 00:11:55.872139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.565 [2024-12-10 00:11:55.872146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.565 [2024-12-10 00:11:55.872150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.872155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9100) on tqpair=0x647690 00:30:11.565 [2024-12-10 00:11:55.872163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:11.565 [2024-12-10 00:11:55.872172] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:11.565 [2024-12-10 00:11:55.872179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.872184] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.872188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x647690) 00:30:11.565 [2024-12-10 00:11:55.872195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.565 [2024-12-10 00:11:55.872207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9100, cid 0, qid 0 00:30:11.565 [2024-12-10 00:11:55.872272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.565 [2024-12-10 00:11:55.872279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.565 [2024-12-10 00:11:55.872283] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.872288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9100) on tqpair=0x647690 00:30:11.565 [2024-12-10 00:11:55.872293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:11.565 [2024-12-10 00:11:55.872303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.872308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.872312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x647690) 00:30:11.565 [2024-12-10 00:11:55.872319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.565 [2024-12-10 00:11:55.872330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9100, cid 0, qid 0 00:30:11.565 [2024-12-10 00:11:55.872396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.565 [2024-12-10 00:11:55.872402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.565 [2024-12-10 00:11:55.872407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.565 [2024-12-10 00:11:55.872411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9100) on tqpair=0x647690 00:30:11.565 [2024-12-10 00:11:55.872416] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:11.565 [2024-12-10 00:11:55.872423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:11.565 [2024-12-10 00:11:55.872431] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:11.565 [2024-12-10 00:11:55.872540] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:11.565 [2024-12-10 00:11:55.872546] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:11.566 [2024-12-10 00:11:55.872554] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.872559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.872563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x647690) 00:30:11.566 [2024-12-10 00:11:55.872570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.566 [2024-12-10 00:11:55.872581] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9100, cid 0, qid 0 00:30:11.566 [2024-12-10 00:11:55.872662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.566 [2024-12-10 00:11:55.872669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.566 [2024-12-10 00:11:55.872675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.872680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9100) on tqpair=0x647690 00:30:11.566 [2024-12-10 
00:11:55.872685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:11.566 [2024-12-10 00:11:55.872695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.872700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.872704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x647690) 00:30:11.566 [2024-12-10 00:11:55.872711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.566 [2024-12-10 00:11:55.872722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9100, cid 0, qid 0 00:30:11.566 [2024-12-10 00:11:55.872784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.566 [2024-12-10 00:11:55.872791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.566 [2024-12-10 00:11:55.872795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.872800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9100) on tqpair=0x647690 00:30:11.566 [2024-12-10 00:11:55.872805] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:11.566 [2024-12-10 00:11:55.872811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:11.566 [2024-12-10 00:11:55.872819] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:11.566 [2024-12-10 00:11:55.872834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:11.566 [2024-12-10 00:11:55.872843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.872848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x647690) 00:30:11.566 [2024-12-10 00:11:55.872855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.566 [2024-12-10 00:11:55.872867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9100, cid 0, qid 0 00:30:11.566 [2024-12-10 00:11:55.872968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:11.566 [2024-12-10 00:11:55.872975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:11.566 [2024-12-10 00:11:55.872979] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.872984] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x647690): datao=0, datal=4096, cccid=0 00:30:11.566 [2024-12-10 00:11:55.872990] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a9100) on tqpair(0x647690): expected_datao=0, payload_size=4096 00:30:11.566 [2024-12-10 00:11:55.872995] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873002] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873007] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
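For context, the controller answering these FABRIC CONNECT / PROPERTY GET / IDENTIFY commands is the target-side subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. A target of this shape is typically assembled with RPCs along the following lines; this is a sketch only, not the exact commands this run used, and the bdev name Malloc0 plus the 64 MiB size are assumptions chosen to match the 131072 x 512-byte namespace reported further down:

  # from the SPDK source tree, with the target application (nvmf_tgt) already running
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420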
00:30:11.566 [2024-12-10 00:11:55.873021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.566 [2024-12-10 00:11:55.873027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.566 [2024-12-10 00:11:55.873032] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9100) on tqpair=0x647690 00:30:11.566 [2024-12-10 00:11:55.873047] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:11.566 [2024-12-10 00:11:55.873053] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:11.566 [2024-12-10 00:11:55.873060] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:11.566 [2024-12-10 00:11:55.873065] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:11.566 [2024-12-10 00:11:55.873072] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:11.566 [2024-12-10 00:11:55.873078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:11.566 [2024-12-10 00:11:55.873088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:11.566 [2024-12-10 00:11:55.873095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x647690) 00:30:11.566 [2024-12-10 00:11:55.873112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:11.566 [2024-12-10 00:11:55.873124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9100, cid 0, qid 0 00:30:11.566 [2024-12-10 00:11:55.873188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.566 [2024-12-10 00:11:55.873194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.566 [2024-12-10 00:11:55.873199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9100) on tqpair=0x647690 00:30:11.566 [2024-12-10 00:11:55.873210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x647690) 00:30:11.566 [2024-12-10 00:11:55.873226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.566 [2024-12-10 00:11:55.873232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=1 on tqpair(0x647690) 00:30:11.566 [2024-12-10 00:11:55.873247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.566 [2024-12-10 00:11:55.873254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x647690) 00:30:11.566 [2024-12-10 00:11:55.873269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.566 [2024-12-10 00:11:55.873276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x647690) 00:30:11.566 [2024-12-10 00:11:55.873291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.566 [2024-12-10 00:11:55.873297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:11.566 [2024-12-10 00:11:55.873308] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:11.566 [2024-12-10 00:11:55.873316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x647690) 00:30:11.566 [2024-12-10 00:11:55.873329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.566 [2024-12-10 00:11:55.873341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9100, cid 0, qid 0 00:30:11.566 [2024-12-10 00:11:55.873347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9280, cid 1, qid 0 00:30:11.566 [2024-12-10 00:11:55.873353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9400, cid 2, qid 0 00:30:11.566 [2024-12-10 00:11:55.873358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9580, cid 3, qid 0 00:30:11.566 [2024-12-10 00:11:55.873363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9700, cid 4, qid 0 00:30:11.566 [2024-12-10 00:11:55.873449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.566 [2024-12-10 00:11:55.873457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.566 [2024-12-10 00:11:55.873461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9700) on tqpair=0x647690 00:30:11.566 [2024-12-10 00:11:55.873472] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:11.566 [2024-12-10 00:11:55.873478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 
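Each 'setting state to ...' line in these traces is one step of the host-side controller initialization state machine (connect adminq -> identify controller -> configure AER -> keep alive -> number of queues -> identify namespaces -> ready). With the console output saved to a file, that sequence can be pulled out of the per-PDU noise with a one-liner such as the following (the file name identify.log is an assumption):

  # list the initialization states in order, collapsing adjacent repeats
  grep -o 'setting state to [^(]*' identify.log | sed 's/ *$//' | uniq -c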
00:30:11.566 [2024-12-10 00:11:55.873488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:11.566 [2024-12-10 00:11:55.873495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:11.566 [2024-12-10 00:11:55.873502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x647690) 00:30:11.566 [2024-12-10 00:11:55.873518] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:11.566 [2024-12-10 00:11:55.873529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9700, cid 4, qid 0 00:30:11.566 [2024-12-10 00:11:55.873592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.566 [2024-12-10 00:11:55.873599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.566 [2024-12-10 00:11:55.873603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.566 [2024-12-10 00:11:55.873608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9700) on tqpair=0x647690 00:30:11.566 [2024-12-10 00:11:55.873659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:11.566 [2024-12-10 00:11:55.873670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:11.566 [2024-12-10 00:11:55.873678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.873683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x647690) 00:30:11.567 [2024-12-10 00:11:55.873690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.567 [2024-12-10 00:11:55.873701] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9700, cid 4, qid 0 00:30:11.567 [2024-12-10 00:11:55.873777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:11.567 [2024-12-10 00:11:55.873784] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:11.567 [2024-12-10 00:11:55.873791] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.873795] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x647690): datao=0, datal=4096, cccid=4 00:30:11.567 [2024-12-10 00:11:55.873801] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a9700) on tqpair(0x647690): expected_datao=0, payload_size=4096 00:30:11.567 [2024-12-10 00:11:55.873807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.873813] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.873818] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.873832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.567 [2024-12-10 00:11:55.873838] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.567 [2024-12-10 00:11:55.873842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.873847] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9700) on tqpair=0x647690 00:30:11.567 [2024-12-10 00:11:55.873857] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:11.567 [2024-12-10 00:11:55.873873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:11.567 [2024-12-10 00:11:55.873884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:11.567 [2024-12-10 00:11:55.873891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.873896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x647690) 00:30:11.567 [2024-12-10 00:11:55.873903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.567 [2024-12-10 00:11:55.873915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9700, cid 4, qid 0 00:30:11.567 [2024-12-10 00:11:55.874004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:11.567 [2024-12-10 00:11:55.874011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:11.567 [2024-12-10 00:11:55.874015] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874020] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x647690): datao=0, datal=4096, cccid=4 00:30:11.567 [2024-12-10 00:11:55.874025] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a9700) on tqpair(0x647690): expected_datao=0, payload_size=4096 00:30:11.567 [2024-12-10 00:11:55.874031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874037] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874042] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.567 [2024-12-10 00:11:55.874057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.567 [2024-12-10 00:11:55.874061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9700) on tqpair=0x647690 00:30:11.567 [2024-12-10 00:11:55.874078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:11.567 [2024-12-10 00:11:55.874089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:11.567 [2024-12-10 00:11:55.874096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x647690) 00:30:11.567 [2024-12-10 00:11:55.874108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.567 [2024-12-10 00:11:55.874121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9700, cid 4, qid 0 00:30:11.567 [2024-12-10 00:11:55.874195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:11.567 [2024-12-10 00:11:55.874202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:11.567 [2024-12-10 00:11:55.874206] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874211] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x647690): datao=0, datal=4096, cccid=4 00:30:11.567 [2024-12-10 00:11:55.874216] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a9700) on tqpair(0x647690): expected_datao=0, payload_size=4096 00:30:11.567 [2024-12-10 00:11:55.874222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874228] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874233] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.567 [2024-12-10 00:11:55.874248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.567 [2024-12-10 00:11:55.874252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9700) on tqpair=0x647690 00:30:11.567 [2024-12-10 00:11:55.874265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:11.567 [2024-12-10 00:11:55.874274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:11.567 [2024-12-10 00:11:55.874284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:11.567 [2024-12-10 00:11:55.874293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:11.567 [2024-12-10 00:11:55.874299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:11.567 [2024-12-10 00:11:55.874305] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:11.567 [2024-12-10 00:11:55.874312] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:11.567 [2024-12-10 00:11:55.874318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:11.567 [2024-12-10 00:11:55.874324] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:11.567 [2024-12-10 00:11:55.874338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x647690) 00:30:11.567 
[2024-12-10 00:11:55.874350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.567 [2024-12-10 00:11:55.874358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x647690) 00:30:11.567 [2024-12-10 00:11:55.874373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.567 [2024-12-10 00:11:55.874387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9700, cid 4, qid 0 00:30:11.567 [2024-12-10 00:11:55.874393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9880, cid 5, qid 0 00:30:11.567 [2024-12-10 00:11:55.874471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.567 [2024-12-10 00:11:55.874479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.567 [2024-12-10 00:11:55.874484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9700) on tqpair=0x647690 00:30:11.567 [2024-12-10 00:11:55.874495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.567 [2024-12-10 00:11:55.874501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.567 [2024-12-10 00:11:55.874505] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9880) on tqpair=0x647690 00:30:11.567 [2024-12-10 00:11:55.874519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x647690) 00:30:11.567 [2024-12-10 00:11:55.874531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.567 [2024-12-10 00:11:55.874542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9880, cid 5, qid 0 00:30:11.567 [2024-12-10 00:11:55.874607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.567 [2024-12-10 00:11:55.874614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.567 [2024-12-10 00:11:55.874618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874623] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9880) on tqpair=0x647690 00:30:11.567 [2024-12-10 00:11:55.874633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x647690) 00:30:11.567 [2024-12-10 00:11:55.874644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.567 [2024-12-10 00:11:55.874655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9880, cid 5, qid 0 00:30:11.567 [2024-12-10 00:11:55.874729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:30:11.567 [2024-12-10 00:11:55.874736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.567 [2024-12-10 00:11:55.874740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9880) on tqpair=0x647690 00:30:11.567 [2024-12-10 00:11:55.874755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x647690) 00:30:11.567 [2024-12-10 00:11:55.874767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.567 [2024-12-10 00:11:55.874778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9880, cid 5, qid 0 00:30:11.567 [2024-12-10 00:11:55.874844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.567 [2024-12-10 00:11:55.874851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.567 [2024-12-10 00:11:55.874855] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.567 [2024-12-10 00:11:55.874860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9880) on tqpair=0x647690 00:30:11.567 [2024-12-10 00:11:55.874874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.874879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x647690) 00:30:11.568 [2024-12-10 00:11:55.874886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.568 [2024-12-10 00:11:55.874894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.874899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x647690) 00:30:11.568 [2024-12-10 00:11:55.874908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.568 [2024-12-10 00:11:55.874916] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.874921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x647690) 00:30:11.568 [2024-12-10 00:11:55.874927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.568 [2024-12-10 00:11:55.874935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.874940] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x647690) 00:30:11.568 [2024-12-10 00:11:55.874946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.568 [2024-12-10 00:11:55.874959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9880, cid 5, qid 0 00:30:11.568 [2024-12-10 00:11:55.874964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9700, cid 4, qid 0 00:30:11.568 [2024-12-10 00:11:55.874970] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9a00, cid 6, qid 0 00:30:11.568 [2024-12-10 00:11:55.874975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9b80, cid 7, qid 0 00:30:11.568 [2024-12-10 00:11:55.875106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:11.568 [2024-12-10 00:11:55.875113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:11.568 [2024-12-10 00:11:55.875118] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875123] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x647690): datao=0, datal=8192, cccid=5 00:30:11.568 [2024-12-10 00:11:55.875128] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a9880) on tqpair(0x647690): expected_datao=0, payload_size=8192 00:30:11.568 [2024-12-10 00:11:55.875134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875147] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875152] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:11.568 [2024-12-10 00:11:55.875174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:11.568 [2024-12-10 00:11:55.875178] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875183] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x647690): datao=0, datal=512, cccid=4 00:30:11.568 [2024-12-10 00:11:55.875188] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a9700) on tqpair(0x647690): expected_datao=0, payload_size=512 00:30:11.568 [2024-12-10 00:11:55.875194] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875200] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875204] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:11.568 [2024-12-10 00:11:55.875216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:11.568 [2024-12-10 00:11:55.875221] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875225] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x647690): datao=0, datal=512, cccid=6 00:30:11.568 [2024-12-10 00:11:55.875231] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a9a00) on tqpair(0x647690): expected_datao=0, payload_size=512 00:30:11.568 [2024-12-10 00:11:55.875236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875242] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875247] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:11.568 [2024-12-10 00:11:55.875261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:11.568 [2024-12-10 00:11:55.875265] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875270] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x647690): datao=0, datal=4096, cccid=7 00:30:11.568 [2024-12-10 00:11:55.875275] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a9b80) on tqpair(0x647690): expected_datao=0, payload_size=4096 00:30:11.568 [2024-12-10 00:11:55.875281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875287] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875292] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.568 [2024-12-10 00:11:55.875307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.568 [2024-12-10 00:11:55.875311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9880) on tqpair=0x647690 00:30:11.568 [2024-12-10 00:11:55.875328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.568 [2024-12-10 00:11:55.875334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.568 [2024-12-10 00:11:55.875338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9700) on tqpair=0x647690 00:30:11.568 [2024-12-10 00:11:55.875354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.568 [2024-12-10 00:11:55.875360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.568 [2024-12-10 00:11:55.875364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9a00) on tqpair=0x647690 00:30:11.568 [2024-12-10 00:11:55.875376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.568 [2024-12-10 00:11:55.875382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.568 [2024-12-10 00:11:55.875387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.568 [2024-12-10 00:11:55.875391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9b80) on tqpair=0x647690 00:30:11.568 ===================================================== 00:30:11.568 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.568 ===================================================== 00:30:11.568 Controller Capabilities/Features 00:30:11.568 ================================ 00:30:11.568 Vendor ID: 8086 00:30:11.568 Subsystem Vendor ID: 8086 00:30:11.568 Serial Number: SPDK00000000000001 00:30:11.568 Model Number: SPDK bdev Controller 00:30:11.568 Firmware Version: 25.01 00:30:11.568 Recommended Arb Burst: 6 00:30:11.568 IEEE OUI Identifier: e4 d2 5c 00:30:11.568 Multi-path I/O 00:30:11.568 May have multiple subsystem ports: Yes 00:30:11.568 May have multiple controllers: Yes 00:30:11.568 Associated with SR-IOV VF: No 00:30:11.568 Max Data Transfer Size: 131072 00:30:11.568 Max Number of Namespaces: 32 00:30:11.568 Max Number of I/O Queues: 127 00:30:11.568 NVMe Specification Version (VS): 1.3 00:30:11.568 NVMe Specification Version (Identify): 1.3 00:30:11.568 Maximum Queue Entries: 128 00:30:11.568 Contiguous Queues Required: Yes 00:30:11.568 Arbitration Mechanisms Supported 00:30:11.568 Weighted Round Robin: Not Supported 
00:30:11.568 Vendor Specific: Not Supported 00:30:11.568 Reset Timeout: 15000 ms 00:30:11.568 Doorbell Stride: 4 bytes 00:30:11.568 NVM Subsystem Reset: Not Supported 00:30:11.568 Command Sets Supported 00:30:11.568 NVM Command Set: Supported 00:30:11.568 Boot Partition: Not Supported 00:30:11.568 Memory Page Size Minimum: 4096 bytes 00:30:11.568 Memory Page Size Maximum: 4096 bytes 00:30:11.568 Persistent Memory Region: Not Supported 00:30:11.568 Optional Asynchronous Events Supported 00:30:11.568 Namespace Attribute Notices: Supported 00:30:11.568 Firmware Activation Notices: Not Supported 00:30:11.568 ANA Change Notices: Not Supported 00:30:11.568 PLE Aggregate Log Change Notices: Not Supported 00:30:11.568 LBA Status Info Alert Notices: Not Supported 00:30:11.568 EGE Aggregate Log Change Notices: Not Supported 00:30:11.568 Normal NVM Subsystem Shutdown event: Not Supported 00:30:11.568 Zone Descriptor Change Notices: Not Supported 00:30:11.568 Discovery Log Change Notices: Not Supported 00:30:11.568 Controller Attributes 00:30:11.568 128-bit Host Identifier: Supported 00:30:11.568 Non-Operational Permissive Mode: Not Supported 00:30:11.568 NVM Sets: Not Supported 00:30:11.568 Read Recovery Levels: Not Supported 00:30:11.568 Endurance Groups: Not Supported 00:30:11.568 Predictable Latency Mode: Not Supported 00:30:11.568 Traffic Based Keep ALive: Not Supported 00:30:11.568 Namespace Granularity: Not Supported 00:30:11.568 SQ Associations: Not Supported 00:30:11.568 UUID List: Not Supported 00:30:11.568 Multi-Domain Subsystem: Not Supported 00:30:11.568 Fixed Capacity Management: Not Supported 00:30:11.568 Variable Capacity Management: Not Supported 00:30:11.568 Delete Endurance Group: Not Supported 00:30:11.568 Delete NVM Set: Not Supported 00:30:11.568 Extended LBA Formats Supported: Not Supported 00:30:11.568 Flexible Data Placement Supported: Not Supported 00:30:11.568 00:30:11.568 Controller Memory Buffer Support 00:30:11.568 ================================ 00:30:11.568 Supported: No 00:30:11.568 00:30:11.568 Persistent Memory Region Support 00:30:11.568 ================================ 00:30:11.568 Supported: No 00:30:11.568 00:30:11.568 Admin Command Set Attributes 00:30:11.568 ============================ 00:30:11.568 Security Send/Receive: Not Supported 00:30:11.568 Format NVM: Not Supported 00:30:11.568 Firmware Activate/Download: Not Supported 00:30:11.568 Namespace Management: Not Supported 00:30:11.568 Device Self-Test: Not Supported 00:30:11.568 Directives: Not Supported 00:30:11.568 NVMe-MI: Not Supported 00:30:11.568 Virtualization Management: Not Supported 00:30:11.569 Doorbell Buffer Config: Not Supported 00:30:11.569 Get LBA Status Capability: Not Supported 00:30:11.569 Command & Feature Lockdown Capability: Not Supported 00:30:11.569 Abort Command Limit: 4 00:30:11.569 Async Event Request Limit: 4 00:30:11.569 Number of Firmware Slots: N/A 00:30:11.569 Firmware Slot 1 Read-Only: N/A 00:30:11.569 Firmware Activation Without Reset: N/A 00:30:11.569 Multiple Update Detection Support: N/A 00:30:11.569 Firmware Update Granularity: No Information Provided 00:30:11.569 Per-Namespace SMART Log: No 00:30:11.569 Asymmetric Namespace Access Log Page: Not Supported 00:30:11.569 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:11.569 Command Effects Log Page: Supported 00:30:11.569 Get Log Page Extended Data: Supported 00:30:11.569 Telemetry Log Pages: Not Supported 00:30:11.569 Persistent Event Log Pages: Not Supported 00:30:11.569 Supported Log Pages Log Page: May Support 
00:30:11.569 Commands Supported & Effects Log Page: Not Supported 00:30:11.569 Feature Identifiers & Effects Log Page:May Support 00:30:11.569 NVMe-MI Commands & Effects Log Page: May Support 00:30:11.569 Data Area 4 for Telemetry Log: Not Supported 00:30:11.569 Error Log Page Entries Supported: 128 00:30:11.569 Keep Alive: Supported 00:30:11.569 Keep Alive Granularity: 10000 ms 00:30:11.569 00:30:11.569 NVM Command Set Attributes 00:30:11.569 ========================== 00:30:11.569 Submission Queue Entry Size 00:30:11.569 Max: 64 00:30:11.569 Min: 64 00:30:11.569 Completion Queue Entry Size 00:30:11.569 Max: 16 00:30:11.569 Min: 16 00:30:11.569 Number of Namespaces: 32 00:30:11.569 Compare Command: Supported 00:30:11.569 Write Uncorrectable Command: Not Supported 00:30:11.569 Dataset Management Command: Supported 00:30:11.569 Write Zeroes Command: Supported 00:30:11.569 Set Features Save Field: Not Supported 00:30:11.569 Reservations: Supported 00:30:11.569 Timestamp: Not Supported 00:30:11.569 Copy: Supported 00:30:11.569 Volatile Write Cache: Present 00:30:11.569 Atomic Write Unit (Normal): 1 00:30:11.569 Atomic Write Unit (PFail): 1 00:30:11.569 Atomic Compare & Write Unit: 1 00:30:11.569 Fused Compare & Write: Supported 00:30:11.569 Scatter-Gather List 00:30:11.569 SGL Command Set: Supported 00:30:11.569 SGL Keyed: Supported 00:30:11.569 SGL Bit Bucket Descriptor: Not Supported 00:30:11.569 SGL Metadata Pointer: Not Supported 00:30:11.569 Oversized SGL: Not Supported 00:30:11.569 SGL Metadata Address: Not Supported 00:30:11.569 SGL Offset: Supported 00:30:11.569 Transport SGL Data Block: Not Supported 00:30:11.569 Replay Protected Memory Block: Not Supported 00:30:11.569 00:30:11.569 Firmware Slot Information 00:30:11.569 ========================= 00:30:11.569 Active slot: 1 00:30:11.569 Slot 1 Firmware Revision: 25.01 00:30:11.569 00:30:11.569 00:30:11.569 Commands Supported and Effects 00:30:11.569 ============================== 00:30:11.569 Admin Commands 00:30:11.569 -------------- 00:30:11.569 Get Log Page (02h): Supported 00:30:11.569 Identify (06h): Supported 00:30:11.569 Abort (08h): Supported 00:30:11.569 Set Features (09h): Supported 00:30:11.569 Get Features (0Ah): Supported 00:30:11.569 Asynchronous Event Request (0Ch): Supported 00:30:11.569 Keep Alive (18h): Supported 00:30:11.569 I/O Commands 00:30:11.569 ------------ 00:30:11.569 Flush (00h): Supported LBA-Change 00:30:11.569 Write (01h): Supported LBA-Change 00:30:11.569 Read (02h): Supported 00:30:11.569 Compare (05h): Supported 00:30:11.569 Write Zeroes (08h): Supported LBA-Change 00:30:11.569 Dataset Management (09h): Supported LBA-Change 00:30:11.569 Copy (19h): Supported LBA-Change 00:30:11.569 00:30:11.569 Error Log 00:30:11.569 ========= 00:30:11.569 00:30:11.569 Arbitration 00:30:11.569 =========== 00:30:11.569 Arbitration Burst: 1 00:30:11.569 00:30:11.569 Power Management 00:30:11.569 ================ 00:30:11.569 Number of Power States: 1 00:30:11.569 Current Power State: Power State #0 00:30:11.569 Power State #0: 00:30:11.569 Max Power: 0.00 W 00:30:11.569 Non-Operational State: Operational 00:30:11.569 Entry Latency: Not Reported 00:30:11.569 Exit Latency: Not Reported 00:30:11.569 Relative Read Throughput: 0 00:30:11.569 Relative Read Latency: 0 00:30:11.569 Relative Write Throughput: 0 00:30:11.569 Relative Write Latency: 0 00:30:11.569 Idle Power: Not Reported 00:30:11.569 Active Power: Not Reported 00:30:11.569 Non-Operational Permissive Mode: Not Supported 00:30:11.569 00:30:11.569 Health 
Information 00:30:11.569 ================== 00:30:11.569 Critical Warnings: 00:30:11.569 Available Spare Space: OK 00:30:11.569 Temperature: OK 00:30:11.569 Device Reliability: OK 00:30:11.569 Read Only: No 00:30:11.569 Volatile Memory Backup: OK 00:30:11.569 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:11.569 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:11.569 Available Spare: 0% 00:30:11.569 Available Spare Threshold: 0% 00:30:11.569 Life Percentage Used:[2024-12-10 00:11:55.875472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.569 [2024-12-10 00:11:55.875478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x647690) 00:30:11.569 [2024-12-10 00:11:55.875485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.569 [2024-12-10 00:11:55.875498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9b80, cid 7, qid 0 00:30:11.569 [2024-12-10 00:11:55.875573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.569 [2024-12-10 00:11:55.875579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.569 [2024-12-10 00:11:55.875584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.569 [2024-12-10 00:11:55.875588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9b80) on tqpair=0x647690 00:30:11.569 [2024-12-10 00:11:55.875619] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:11.569 [2024-12-10 00:11:55.875631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9100) on tqpair=0x647690 00:30:11.569 [2024-12-10 00:11:55.875638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.569 [2024-12-10 00:11:55.875644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9280) on tqpair=0x647690 00:30:11.569 [2024-12-10 00:11:55.875649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.569 [2024-12-10 00:11:55.875659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9400) on tqpair=0x647690 00:30:11.569 [2024-12-10 00:11:55.875665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.569 [2024-12-10 00:11:55.875671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9580) on tqpair=0x647690 00:30:11.569 [2024-12-10 00:11:55.875676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.569 [2024-12-10 00:11:55.875684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.569 [2024-12-10 00:11:55.875689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.569 [2024-12-10 00:11:55.875693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x647690) 00:30:11.569 [2024-12-10 00:11:55.875700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.569 [2024-12-10 00:11:55.875713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9580, cid 3, qid 0 00:30:11.569 [2024-12-10 
00:11:55.875773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.569 [2024-12-10 00:11:55.875780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.569 [2024-12-10 00:11:55.875784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.569 [2024-12-10 00:11:55.875789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9580) on tqpair=0x647690 00:30:11.569 [2024-12-10 00:11:55.875796] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.569 [2024-12-10 00:11:55.875800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.569 [2024-12-10 00:11:55.875805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x647690) 00:30:11.569 [2024-12-10 00:11:55.875811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.569 [2024-12-10 00:11:55.879830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9580, cid 3, qid 0 00:30:11.570 [2024-12-10 00:11:55.879843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.570 [2024-12-10 00:11:55.879849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.570 [2024-12-10 00:11:55.879854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.570 [2024-12-10 00:11:55.879858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9580) on tqpair=0x647690 00:30:11.570 [2024-12-10 00:11:55.879864] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:11.570 [2024-12-10 00:11:55.879870] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:11.570 [2024-12-10 00:11:55.879881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:11.570 [2024-12-10 00:11:55.879885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:11.570 [2024-12-10 00:11:55.879890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x647690) 00:30:11.570 [2024-12-10 00:11:55.879897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.570 [2024-12-10 00:11:55.879909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a9580, cid 3, qid 0 00:30:11.570 [2024-12-10 00:11:55.879986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:11.570 [2024-12-10 00:11:55.879993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:11.570 [2024-12-10 00:11:55.879997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:11.570 [2024-12-10 00:11:55.880002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a9580) on tqpair=0x647690 00:30:11.570 [2024-12-10 00:11:55.880010] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 0 milliseconds 00:30:11.570 0% 00:30:11.570 Data Units Read: 0 00:30:11.570 Data Units Written: 0 00:30:11.570 Host Read Commands: 0 00:30:11.570 Host Write Commands: 0 00:30:11.570 Controller Busy Time: 0 minutes 00:30:11.570 Power Cycles: 0 00:30:11.570 Power On Hours: 0 hours 00:30:11.570 Unsafe Shutdowns: 0 00:30:11.570 Unrecoverable Media Errors: 0 00:30:11.570 Lifetime Error Log Entries: 0 00:30:11.570 Warning Temperature 
Time: 0 minutes 00:30:11.570 Critical Temperature Time: 0 minutes 00:30:11.570 00:30:11.570 Number of Queues 00:30:11.570 ================ 00:30:11.570 Number of I/O Submission Queues: 127 00:30:11.570 Number of I/O Completion Queues: 127 00:30:11.570 00:30:11.570 Active Namespaces 00:30:11.570 ================= 00:30:11.570 Namespace ID:1 00:30:11.570 Error Recovery Timeout: Unlimited 00:30:11.570 Command Set Identifier: NVM (00h) 00:30:11.570 Deallocate: Supported 00:30:11.570 Deallocated/Unwritten Error: Not Supported 00:30:11.570 Deallocated Read Value: Unknown 00:30:11.570 Deallocate in Write Zeroes: Not Supported 00:30:11.570 Deallocated Guard Field: 0xFFFF 00:30:11.570 Flush: Supported 00:30:11.570 Reservation: Supported 00:30:11.570 Namespace Sharing Capabilities: Multiple Controllers 00:30:11.570 Size (in LBAs): 131072 (0GiB) 00:30:11.570 Capacity (in LBAs): 131072 (0GiB) 00:30:11.570 Utilization (in LBAs): 131072 (0GiB) 00:30:11.570 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:11.570 EUI64: ABCDEF0123456789 00:30:11.570 UUID: b28da48b-1b99-4f71-a44c-cb773438733e 00:30:11.570 Thin Provisioning: Not Supported 00:30:11.570 Per-NS Atomic Units: Yes 00:30:11.570 Atomic Boundary Size (Normal): 0 00:30:11.570 Atomic Boundary Size (PFail): 0 00:30:11.570 Atomic Boundary Offset: 0 00:30:11.570 Maximum Single Source Range Length: 65535 00:30:11.570 Maximum Copy Length: 65535 00:30:11.570 Maximum Source Range Count: 1 00:30:11.570 NGUID/EUI64 Never Reused: No 00:30:11.570 Namespace Write Protected: No 00:30:11.570 Number of LBA Formats: 1 00:30:11.570 Current LBA Format: LBA Format #00 00:30:11.570 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:11.570 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:11.570 rmmod nvme_tcp 00:30:11.570 rmmod nvme_fabrics 00:30:11.570 rmmod nvme_keyring 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 510630 ']' 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@518 -- # killprocess 510630 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 510630 ']' 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 510630 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:11.570 00:11:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 510630 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 510630' 00:30:11.829 killing process with pid 510630 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 510630 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 510630 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.829 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:14.366 00:30:14.366 real 0m11.379s 00:30:14.366 user 0m8.261s 00:30:14.366 sys 0m6.154s 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:14.366 ************************************ 00:30:14.366 END TEST nvmf_identify 00:30:14.366 ************************************ 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.366 ************************************ 
00:30:14.366 START TEST nvmf_perf 00:30:14.366 ************************************ 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:14.366 * Looking for test storage... 00:30:14.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:14.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.366 --rc genhtml_branch_coverage=1 00:30:14.366 --rc genhtml_function_coverage=1 00:30:14.366 --rc genhtml_legend=1 00:30:14.366 --rc geninfo_all_blocks=1 00:30:14.366 --rc geninfo_unexecuted_blocks=1 00:30:14.366 00:30:14.366 ' 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:14.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.366 --rc genhtml_branch_coverage=1 00:30:14.366 --rc genhtml_function_coverage=1 00:30:14.366 --rc genhtml_legend=1 00:30:14.366 --rc geninfo_all_blocks=1 00:30:14.366 --rc geninfo_unexecuted_blocks=1 00:30:14.366 00:30:14.366 ' 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:14.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.366 --rc genhtml_branch_coverage=1 00:30:14.366 --rc genhtml_function_coverage=1 00:30:14.366 --rc genhtml_legend=1 00:30:14.366 --rc geninfo_all_blocks=1 00:30:14.366 --rc geninfo_unexecuted_blocks=1 00:30:14.366 00:30:14.366 ' 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:14.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.366 --rc genhtml_branch_coverage=1 00:30:14.366 --rc genhtml_function_coverage=1 00:30:14.366 --rc genhtml_legend=1 00:30:14.366 --rc geninfo_all_blocks=1 00:30:14.366 --rc geninfo_unexecuted_blocks=1 00:30:14.366 00:30:14.366 ' 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.366 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:14.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.367 00:11:58 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:14.367 00:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:22.488 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:22.488 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:22.488 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:22.488 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:22.488 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:22.488 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:22.488 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:22.488 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:22.488 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:22.488 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:22.488 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:22.488 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:22.489 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:22.489 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:22.489 Found net devices under 0000:af:00.0: cvl_0_0 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.489 00:12:05 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:22.489 Found net devices under 0000:af:00.1: cvl_0_1 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:22.489 00:12:05 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:22.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:22.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:30:22.489 00:30:22.489 --- 10.0.0.2 ping statistics --- 00:30:22.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.489 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:22.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:22.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:30:22.489 00:30:22.489 --- 10.0.0.1 ping statistics --- 00:30:22.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.489 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=515165 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 515165 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 515165 ']' 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.489 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:22.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.490 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.490 00:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:22.490 [2024-12-10 00:12:06.025700] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:30:22.490 [2024-12-10 00:12:06.025748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.490 [2024-12-10 00:12:06.121993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:22.490 [2024-12-10 00:12:06.166322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.490 [2024-12-10 00:12:06.166358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.490 [2024-12-10 00:12:06.166368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.490 [2024-12-10 00:12:06.166377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.490 [2024-12-10 00:12:06.166384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:22.490 [2024-12-10 00:12:06.167986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.490 [2024-12-10 00:12:06.168094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:22.490 [2024-12-10 00:12:06.168202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.490 [2024-12-10 00:12:06.168204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:22.490 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.490 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:22.490 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:22.490 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:22.490 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:22.490 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.490 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:22.490 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:25.768 00:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:25.768 00:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:25.768 00:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:30:25.768 00:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:26.026 00:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
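A condensed sketch of the target-side bring-up that the trace above performs, using the interface names, addresses, and option values from this run (paths abbreviated to the spdk repository root; the helpers in nvmf/common.sh add the retries, error checks, and the SPDK_NVMF iptables comment that are left out here):

ip netns add cvl_0_0_ns_spdk                                     # NVMF_TARGET_INTERFACE=cvl_0_0, NVMF_INITIATOR_INTERFACE=cvl_0_1
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # tagged with an SPDK_NVMF comment in the real rule
ping -c 1 10.0.0.2                                               # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target ns -> root ns
modprobe nvme-tcp

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # pid 515165 in this run; the harness waits on /var/tmp/spdk.sock
./scripts/gen_nvme.sh | ./scripts/rpc.py load_subsystem_config   # attach the local NVMe; shown as a pipeline, which is how perf.sh line 28 appears to invoke it
./scripts/rpc.py framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr'   # -> 0000:d8:00.0
./scripts/rpc.py bdev_malloc_create 64 512                       # -> Malloc0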
00:30:26.026 00:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:30:26.026 00:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:26.026 00:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:26.026 00:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:26.283 [2024-12-10 00:12:10.572148] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.283 00:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:26.542 00:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:26.542 00:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:26.542 00:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:26.542 00:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:26.799 00:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.057 [2024-12-10 00:12:11.338973] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.057 00:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:27.314 00:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:30:27.314 00:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:30:27.314 00:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:27.315 00:12:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:30:28.692 Initializing NVMe Controllers 00:30:28.692 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:30:28.692 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:30:28.692 Initialization complete. Launching workers. 
00:30:28.692 ======================================================== 00:30:28.692 Latency(us) 00:30:28.692 Device Information : IOPS MiB/s Average min max 00:30:28.692 PCIE (0000:d8:00.0) NSID 1 from core 0: 100812.10 393.80 316.85 34.00 4503.71 00:30:28.692 ======================================================== 00:30:28.692 Total : 100812.10 393.80 316.85 34.00 4503.71 00:30:28.692 00:30:28.692 00:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:30.071 Initializing NVMe Controllers 00:30:30.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:30.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:30.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:30.072 Initialization complete. Launching workers. 00:30:30.072 ======================================================== 00:30:30.072 Latency(us) 00:30:30.072 Device Information : IOPS MiB/s Average min max 00:30:30.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 104.00 0.41 9872.34 101.86 45660.46 00:30:30.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 43.00 0.17 23714.25 7963.17 47886.39 00:30:30.072 ======================================================== 00:30:30.072 Total : 147.00 0.57 13921.33 101.86 47886.39 00:30:30.072 00:30:30.072 00:12:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:31.454 Initializing NVMe Controllers 00:30:31.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:31.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:31.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:31.454 Initialization complete. Launching workers. 00:30:31.454 ======================================================== 00:30:31.454 Latency(us) 00:30:31.454 Device Information : IOPS MiB/s Average min max 00:30:31.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11650.00 45.51 2747.29 433.30 6384.29 00:30:31.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3852.00 15.05 8350.41 7147.24 16163.33 00:30:31.454 ======================================================== 00:30:31.454 Total : 15502.00 60.55 4139.58 433.30 16163.33 00:30:31.454 00:30:31.454 00:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:31.454 00:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:31.454 00:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:33.990 Initializing NVMe Controllers 00:30:33.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:33.990 Controller IO queue size 128, less than required. 00:30:33.990 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
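For reference, the subsystem provisioning and the first perf runs exercised above reduce to roughly the following (paths abbreviated; queue depth, IO size, and run time differ per invocation as the result tables show):

./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0       # becomes NSID 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1       # becomes NSID 2
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# local PCIe baseline, then the same workload against the TCP-attached subsystem
./build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'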
00:30:33.990 Controller IO queue size 128, less than required. 00:30:33.990 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:33.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:33.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:33.990 Initialization complete. Launching workers. 00:30:33.990 ======================================================== 00:30:33.990 Latency(us) 00:30:33.990 Device Information : IOPS MiB/s Average min max 00:30:33.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1711.99 428.00 75531.15 47277.37 128230.07 00:30:33.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 609.00 152.25 219935.38 78747.40 322933.99 00:30:33.990 ======================================================== 00:30:33.990 Total : 2320.98 580.25 113420.93 47277.37 322933.99 00:30:33.990 00:30:33.990 00:12:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:34.249 No valid NVMe controllers or AIO or URING devices found 00:30:34.249 Initializing NVMe Controllers 00:30:34.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:34.249 Controller IO queue size 128, less than required. 00:30:34.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:34.249 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:34.249 Controller IO queue size 128, less than required. 00:30:34.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:34.249 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:34.249 WARNING: Some requested NVMe devices were skipped 00:30:34.249 00:12:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:36.807 Initializing NVMe Controllers 00:30:36.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:36.807 Controller IO queue size 128, less than required. 00:30:36.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.807 Controller IO queue size 128, less than required. 00:30:36.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:36.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:36.807 Initialization complete. Launching workers. 
00:30:36.807 00:30:36.807 ==================== 00:30:36.807 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:36.807 TCP transport: 00:30:36.807 polls: 14206 00:30:36.807 idle_polls: 10327 00:30:36.807 sock_completions: 3879 00:30:36.807 nvme_completions: 6067 00:30:36.807 submitted_requests: 9092 00:30:36.807 queued_requests: 1 00:30:36.807 00:30:36.807 ==================== 00:30:36.807 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:36.807 TCP transport: 00:30:36.807 polls: 11250 00:30:36.807 idle_polls: 6880 00:30:36.807 sock_completions: 4370 00:30:36.807 nvme_completions: 6811 00:30:36.807 submitted_requests: 10142 00:30:36.807 queued_requests: 1 00:30:36.807 ======================================================== 00:30:36.807 Latency(us) 00:30:36.807 Device Information : IOPS MiB/s Average min max 00:30:36.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1513.91 378.48 86686.69 61080.55 136967.02 00:30:36.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1699.59 424.90 75626.76 42403.48 105404.65 00:30:36.807 ======================================================== 00:30:36.807 Total : 3213.50 803.38 80837.19 42403.48 136967.02 00:30:36.807 00:30:36.807 00:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:36.807 00:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:36.807 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:30:36.807 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:36.807 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:36.807 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:36.807 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:30:36.808 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:36.808 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:30:36.808 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:36.808 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:36.808 rmmod nvme_tcp 00:30:36.808 rmmod nvme_fabrics 00:30:36.808 rmmod nvme_keyring 00:30:36.808 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:36.808 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:30:36.808 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:30:36.808 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 515165 ']' 00:30:36.808 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 515165 00:30:36.808 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 515165 ']' 00:30:36.808 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 515165 00:30:36.808 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:30:36.808 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:36.808 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 515165 00:30:37.067 00:12:21 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:37.067 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:37.067 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 515165' 00:30:37.067 killing process with pid 515165 00:30:37.067 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 515165 00:30:37.067 00:12:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 515165 00:30:39.085 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:39.085 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:39.085 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:39.085 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:30:39.085 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:30:39.085 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:39.085 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:30:39.085 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:39.085 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:39.085 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.085 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.085 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.120 00:12:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:41.120 00:30:41.120 real 0m27.062s 00:30:41.120 user 1m8.567s 00:30:41.120 sys 0m9.868s 00:30:41.120 00:12:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:41.120 00:12:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:41.120 ************************************ 00:30:41.120 END TEST nvmf_perf 00:30:41.120 ************************************ 00:30:41.120 00:12:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:41.120 00:12:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:41.120 00:12:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:41.120 00:12:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.120 ************************************ 00:30:41.120 START TEST nvmf_fio_host 00:30:41.120 ************************************ 00:30:41.120 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:41.384 * Looking for test storage... 
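The nvmftestfini teardown that closes the perf test above amounts to roughly this sequence (the _remove_spdk_ns step is sketched as a plain namespace delete, which is an assumption about that helper):

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp                                 # also unloads nvme_fabrics / nvme_keyring, as logged
modprobe -v -r nvme-fabrics
kill $nvmfpid                                           # 515165 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the rule tagged earlier
ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1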
00:30:41.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:41.384 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:41.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.385 --rc genhtml_branch_coverage=1 00:30:41.385 --rc genhtml_function_coverage=1 00:30:41.385 --rc genhtml_legend=1 00:30:41.385 --rc geninfo_all_blocks=1 00:30:41.385 --rc geninfo_unexecuted_blocks=1 00:30:41.385 00:30:41.385 ' 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:41.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.385 --rc genhtml_branch_coverage=1 00:30:41.385 --rc genhtml_function_coverage=1 00:30:41.385 --rc genhtml_legend=1 00:30:41.385 --rc geninfo_all_blocks=1 00:30:41.385 --rc geninfo_unexecuted_blocks=1 00:30:41.385 00:30:41.385 ' 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:41.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.385 --rc genhtml_branch_coverage=1 00:30:41.385 --rc genhtml_function_coverage=1 00:30:41.385 --rc genhtml_legend=1 00:30:41.385 --rc geninfo_all_blocks=1 00:30:41.385 --rc geninfo_unexecuted_blocks=1 00:30:41.385 00:30:41.385 ' 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:41.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.385 --rc genhtml_branch_coverage=1 00:30:41.385 --rc genhtml_function_coverage=1 00:30:41.385 --rc genhtml_legend=1 00:30:41.385 --rc geninfo_all_blocks=1 00:30:41.385 --rc geninfo_unexecuted_blocks=1 00:30:41.385 00:30:41.385 ' 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.385 00:12:25 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:41.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:41.385 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:41.386 
00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:41.386 00:12:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.512 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.512 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:30:49.512 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:49.512 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:49.512 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:49.513 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:49.513 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:49.513 Found net devices under 0000:af:00.0: cvl_0_0 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:49.513 Found net devices under 0000:af:00.1: cvl_0_1 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:49.513 00:12:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:49.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:30:49.513 00:30:49.513 --- 10.0.0.2 ping statistics --- 00:30:49.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.513 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:49.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:30:49.513 00:30:49.513 --- 10.0.0.1 ping statistics --- 00:30:49.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.513 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:49.513 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.514 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=521770 00:30:49.514 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:49.514 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:49.514 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 521770 00:30:49.514 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 521770 ']' 00:30:49.514 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.514 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:49.514 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.514 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:49.514 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.514 [2024-12-10 00:12:33.159512] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
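The nvmf_tcp_init steps traced above reduce to the following sequence (a minimal sketch restating commands already visible in the trace; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this E810 test bed):

  # move the target-side port into its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1 on cvl_0_1, target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the default NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # launch the SPDK target inside the namespace: shm id 0, all tracepoint groups, 4-core mask
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF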
00:30:49.514 [2024-12-10 00:12:33.159573] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.514 [2024-12-10 00:12:33.256461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:49.514 [2024-12-10 00:12:33.295168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.514 [2024-12-10 00:12:33.295208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.514 [2024-12-10 00:12:33.295217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.514 [2024-12-10 00:12:33.295225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.514 [2024-12-10 00:12:33.295249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.514 [2024-12-10 00:12:33.296938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.514 [2024-12-10 00:12:33.297050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:49.514 [2024-12-10 00:12:33.297158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.514 [2024-12-10 00:12:33.297159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:49.773 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.773 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:30:49.773 00:12:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:49.773 [2024-12-10 00:12:34.168795] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.773 00:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:49.773 00:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:49.773 00:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.032 00:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:50.032 Malloc1 00:30:50.032 00:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:50.290 00:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:50.549 00:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.808 [2024-12-10 00:12:35.051633] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.808 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:50.808 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:50.808 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:50.808 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:50.808 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:50.808 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:50.808 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:50.808 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:50.808 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:30:50.808 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:50.808 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:50.808 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:50.808 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:30:50.808 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:51.094 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:51.094 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:51.094 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:51.094 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:51.094 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:51.094 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:51.094 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:51.094 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:51.094 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:51.094 00:12:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:51.365 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:51.365 fio-3.35 00:30:51.365 Starting 1 thread 00:30:53.900 00:30:53.900 test: (groupid=0, jobs=1): 
err= 0: pid=522325: Tue Dec 10 00:12:38 2024 00:30:53.900 read: IOPS=12.3k, BW=48.1MiB/s (50.4MB/s)(96.4MiB/2005msec) 00:30:53.900 slat (nsec): min=1490, max=242990, avg=1638.33, stdev=2196.84 00:30:53.900 clat (usec): min=3095, max=10411, avg=5751.03, stdev=429.53 00:30:53.900 lat (usec): min=3134, max=10413, avg=5752.67, stdev=429.48 00:30:53.900 clat percentiles (usec): 00:30:53.900 | 1.00th=[ 4686], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:30:53.900 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5735], 60.00th=[ 5866], 00:30:53.900 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6259], 95.00th=[ 6390], 00:30:53.900 | 99.00th=[ 6652], 99.50th=[ 6783], 99.90th=[ 8094], 99.95th=[ 9503], 00:30:53.900 | 99.99th=[10028] 00:30:53.900 bw ( KiB/s): min=48592, max=49776, per=99.97%, avg=49234.00, stdev=557.19, samples=4 00:30:53.900 iops : min=12148, max=12444, avg=12308.50, stdev=139.30, samples=4 00:30:53.900 write: IOPS=12.3k, BW=48.0MiB/s (50.3MB/s)(96.2MiB/2005msec); 0 zone resets 00:30:53.900 slat (nsec): min=1533, max=227628, avg=1696.08, stdev=1631.89 00:30:53.900 clat (usec): min=2444, max=8968, avg=4634.07, stdev=357.13 00:30:53.900 lat (usec): min=2459, max=8969, avg=4635.77, stdev=357.20 00:30:53.900 clat percentiles (usec): 00:30:53.900 | 1.00th=[ 3785], 5.00th=[ 4080], 10.00th=[ 4228], 20.00th=[ 4359], 00:30:53.900 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:30:53.900 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 5080], 95.00th=[ 5145], 00:30:53.900 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 6915], 99.95th=[ 7570], 00:30:53.900 | 99.99th=[ 8979] 00:30:53.900 bw ( KiB/s): min=48896, max=49384, per=100.00%, avg=49120.00, stdev=248.64, samples=4 00:30:53.900 iops : min=12224, max=12348, avg=12280.00, stdev=63.41, samples=4 00:30:53.900 lat (msec) : 4=1.63%, 10=98.36%, 20=0.01% 00:30:53.900 cpu : usr=69.71%, sys=29.29%, ctx=81, majf=0, minf=2 00:30:53.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:53.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:53.900 issued rwts: total=24686,24618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:53.900 00:30:53.900 Run status group 0 (all jobs): 00:30:53.900 READ: bw=48.1MiB/s (50.4MB/s), 48.1MiB/s-48.1MiB/s (50.4MB/s-50.4MB/s), io=96.4MiB (101MB), run=2005-2005msec 00:30:53.900 WRITE: bw=48.0MiB/s (50.3MB/s), 48.0MiB/s-48.0MiB/s (50.3MB/s-50.3MB/s), io=96.2MiB (101MB), run=2005-2005msec 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:53.900 00:12:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:54.158 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:54.158 fio-3.35 00:30:54.158 Starting 1 thread 00:30:56.694 00:30:56.694 test: (groupid=0, jobs=1): err= 0: pid=522906: Tue Dec 10 00:12:40 2024 00:30:56.694 read: IOPS=11.4k, BW=178MiB/s (187MB/s)(357MiB/2005msec) 00:30:56.694 slat (nsec): min=2420, max=85456, avg=2652.18, stdev=1096.71 00:30:56.694 clat (usec): min=1690, max=12645, avg=6428.57, stdev=1501.63 00:30:56.694 lat (usec): min=1692, max=12648, avg=6431.22, stdev=1501.71 00:30:56.694 clat percentiles (usec): 00:30:56.694 | 1.00th=[ 3458], 5.00th=[ 4047], 10.00th=[ 4490], 20.00th=[ 5080], 00:30:56.694 | 30.00th=[ 5538], 40.00th=[ 5997], 50.00th=[ 6456], 60.00th=[ 6915], 00:30:56.694 | 70.00th=[ 7308], 80.00th=[ 7701], 90.00th=[ 8291], 95.00th=[ 8848], 00:30:56.694 | 99.00th=[10159], 99.50th=[10552], 99.90th=[11731], 99.95th=[12125], 00:30:56.694 | 99.99th=[12649] 00:30:56.694 bw ( KiB/s): min=87584, max=95552, per=50.80%, avg=92752.00, stdev=3530.51, samples=4 00:30:56.694 iops : min= 5474, max= 5972, avg=5797.00, stdev=220.66, samples=4 00:30:56.694 write: IOPS=6706, BW=105MiB/s (110MB/s)(189MiB/1808msec); 0 zone resets 00:30:56.694 
slat (usec): min=28, max=256, avg=29.55, stdev= 4.72 00:30:56.694 clat (usec): min=3926, max=14404, avg=8169.20, stdev=1474.48 00:30:56.694 lat (usec): min=3956, max=14433, avg=8198.75, stdev=1475.01 00:30:56.694 clat percentiles (usec): 00:30:56.694 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6456], 20.00th=[ 6915], 00:30:56.694 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8291], 00:30:56.694 | 70.00th=[ 8717], 80.00th=[ 9372], 90.00th=[10290], 95.00th=[10945], 00:30:56.694 | 99.00th=[12125], 99.50th=[12518], 99.90th=[13829], 99.95th=[14091], 00:30:56.694 | 99.99th=[14353] 00:30:56.694 bw ( KiB/s): min=91648, max=98464, per=89.98%, avg=96560.00, stdev=3285.94, samples=4 00:30:56.694 iops : min= 5728, max= 6154, avg=6035.00, stdev=205.37, samples=4 00:30:56.694 lat (msec) : 2=0.03%, 4=2.97%, 10=91.62%, 20=5.38% 00:30:56.694 cpu : usr=84.13%, sys=15.07%, ctx=41, majf=0, minf=2 00:30:56.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:30:56.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:56.694 issued rwts: total=22878,12126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.694 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:56.694 00:30:56.694 Run status group 0 (all jobs): 00:30:56.694 READ: bw=178MiB/s (187MB/s), 178MiB/s-178MiB/s (187MB/s-187MB/s), io=357MiB (375MB), run=2005-2005msec 00:30:56.694 WRITE: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=189MiB (199MB), run=1808-1808msec 00:30:56.694 00:12:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:56.953 rmmod nvme_tcp 00:30:56.953 rmmod nvme_fabrics 00:30:56.953 rmmod nvme_keyring 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 521770 ']' 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 521770 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 521770 ']' 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 521770 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 521770 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 521770' 00:30:56.953 killing process with pid 521770 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 521770 00:30:56.953 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 521770 00:30:57.213 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:57.213 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:57.213 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:57.213 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:30:57.213 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:30:57.213 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:57.213 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:30:57.213 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:57.213 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:57.213 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.213 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.213 00:12:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.753 00:12:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:59.754 00:30:59.754 real 0m18.039s 00:30:59.754 user 0m56.584s 00:30:59.754 sys 0m8.191s 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.754 ************************************ 00:30:59.754 END TEST nvmf_fio_host 00:30:59.754 ************************************ 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.754 ************************************ 00:30:59.754 START TEST nvmf_failover 00:30:59.754 ************************************ 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:59.754 * Looking for test storage... 00:30:59.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:59.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.754 --rc genhtml_branch_coverage=1 00:30:59.754 --rc genhtml_function_coverage=1 00:30:59.754 --rc genhtml_legend=1 00:30:59.754 --rc geninfo_all_blocks=1 00:30:59.754 --rc geninfo_unexecuted_blocks=1 00:30:59.754 00:30:59.754 ' 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:59.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.754 --rc genhtml_branch_coverage=1 00:30:59.754 --rc genhtml_function_coverage=1 00:30:59.754 --rc genhtml_legend=1 00:30:59.754 --rc geninfo_all_blocks=1 00:30:59.754 --rc geninfo_unexecuted_blocks=1 00:30:59.754 00:30:59.754 ' 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:59.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.754 --rc genhtml_branch_coverage=1 00:30:59.754 --rc genhtml_function_coverage=1 00:30:59.754 --rc genhtml_legend=1 00:30:59.754 --rc geninfo_all_blocks=1 00:30:59.754 --rc geninfo_unexecuted_blocks=1 00:30:59.754 00:30:59.754 ' 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:59.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.754 --rc genhtml_branch_coverage=1 00:30:59.754 --rc genhtml_function_coverage=1 00:30:59.754 --rc genhtml_legend=1 00:30:59.754 --rc geninfo_all_blocks=1 00:30:59.754 --rc geninfo_unexecuted_blocks=1 00:30:59.754 00:30:59.754 ' 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.754 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:59.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
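At this point failover.sh has defined only the malloc bdev geometry (a 64 MB bdev with 512-byte blocks) and the $rpc_py helper. Standing up a TCP target around such a bdev follows the same rpc.py sequence the nvmf_fio_host run used earlier in this log; the sketch below simply reuses it with those values, and the extra listener on NVMF_SECOND_PORT (4421, defined above) is an assumption about how the alternate path for failover is exposed, not something shown in the trace:

  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc1
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # assumed second listener giving the initiator an alternate path to fail over to
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The bdevperf RPC socket defined just below (/var/tmp/bdevperf.sock) is driven with the same script by adding -s /var/tmp/bdevperf.sock, pointing it at that socket instead of the default /var/tmp/spdk.sock.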
00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:30:59.755 00:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:07.904 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:07.904 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:07.904 Found net devices under 0000:af:00.0: cvl_0_0 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.904 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:07.904 Found net devices under 0000:af:00.1: cvl_0_1 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:07.905 00:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:07.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:07.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:31:07.905 00:31:07.905 --- 10.0.0.2 ping statistics --- 00:31:07.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.905 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:07.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:07.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:31:07.905 00:31:07.905 --- 10.0.0.1 ping statistics --- 00:31:07.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.905 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=527103 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 527103 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 527103 ']' 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:07.905 00:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:07.905 [2024-12-10 00:12:51.317429] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:31:07.905 [2024-12-10 00:12:51.317476] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.905 [2024-12-10 00:12:51.411928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:07.905 [2024-12-10 00:12:51.455306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:07.905 [2024-12-10 00:12:51.455341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.905 [2024-12-10 00:12:51.455351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.905 [2024-12-10 00:12:51.455360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.905 [2024-12-10 00:12:51.455367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:07.905 [2024-12-10 00:12:51.459209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:07.905 [2024-12-10 00:12:51.459254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.905 [2024-12-10 00:12:51.459255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:07.905 00:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:07.905 00:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:07.905 00:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:07.905 00:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:07.905 00:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:07.905 00:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:07.905 00:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:08.165 [2024-12-10 00:12:52.383158] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.165 00:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:08.165 Malloc0 00:31:08.424 00:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:08.424 00:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:08.683 00:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:08.943 [2024-12-10 00:12:53.212506] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.943 00:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:08.943 [2024-12-10 00:12:53.409051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:09.201 00:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:09.201 [2024-12-10 00:12:53.601655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:31:09.201 00:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=527440 00:31:09.201 00:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:09.201 00:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:09.201 00:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 527440 /var/tmp/bdevperf.sock 00:31:09.201 00:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 527440 ']' 00:31:09.201 00:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:09.201 00:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:09.201 00:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:09.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:09.201 00:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:09.201 00:12:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:10.138 00:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:10.138 00:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:10.138 00:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:10.707 NVMe0n1 00:31:10.707 00:12:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:10.967 00:31:10.967 00:12:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=527707 00:31:10.967 00:12:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:10.967 00:12:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:11.901 00:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:12.160 [2024-12-10 00:12:56.418022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.160 [2024-12-10 00:12:56.418094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.160 [2024-12-10 00:12:56.418104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 
00:12:56.418113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same 
with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418485] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 [2024-12-10 00:12:56.418519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772010 is same with the state(6) to be set 00:31:12.161 00:12:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:15.451 00:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:15.451 00:31:15.451 00:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:15.710 [2024-12-10 00:12:59.938404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 
[2024-12-10 00:12:59.938558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 [2024-12-10 00:12:59.938743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772e10 is same with the state(6) to be set 00:31:15.710 00:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:18.998 00:13:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:18.998 [2024-12-10 00:13:03.149830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.998 00:13:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:19.942 00:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:19.942 [2024-12-10 00:13:04.363750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.942 [2024-12-10 00:13:04.363791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.942 [2024-12-10 00:13:04.363801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.942 [2024-12-10 00:13:04.363809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.942 [2024-12-10 00:13:04.363818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.942 [2024-12-10 00:13:04.363836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.942 [2024-12-10 00:13:04.363845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.363996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364081] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364106] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.943 [2024-12-10 00:13:04.364165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the 
state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 [2024-12-10 00:13:04.364311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bf1b0 is same with the state(6) to be set 00:31:19.944 00:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 527707 00:31:26.527 { 00:31:26.527 "results": [ 00:31:26.527 { 00:31:26.527 "job": "NVMe0n1", 00:31:26.527 "core_mask": "0x1", 00:31:26.527 "workload": "verify", 00:31:26.527 "status": "finished", 00:31:26.527 "verify_range": { 00:31:26.527 "start": 0, 00:31:26.527 "length": 16384 00:31:26.527 }, 00:31:26.527 "queue_depth": 128, 00:31:26.527 "io_size": 4096, 00:31:26.527 "runtime": 15.011132, 00:31:26.527 "iops": 11403.936758400367, 00:31:26.527 "mibps": 44.54662796250143, 00:31:26.527 "io_failed": 18037, 00:31:26.527 "io_timeout": 0, 00:31:26.527 "avg_latency_us": 10133.12316159452, 00:31:26.527 "min_latency_us": 404.6848, 00:31:26.527 "max_latency_us": 21705.5232 00:31:26.527 } 00:31:26.527 ], 00:31:26.527 "core_count": 1 00:31:26.527 } 00:31:26.527 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 527440 00:31:26.527 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 527440 ']' 00:31:26.527 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 527440 00:31:26.527 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:31:26.527 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:26.527 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527440 00:31:26.527 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:26.527 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:26.527 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527440' 00:31:26.527 killing process with pid 527440 00:31:26.527 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 527440 00:31:26.527 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 527440 00:31:26.527 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:26.527 [2024-12-10 00:12:53.680962] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:31:26.527 [2024-12-10 00:12:53.681022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527440 ] 00:31:26.527 [2024-12-10 00:12:53.771747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.527 [2024-12-10 00:12:53.811299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.527 Running I/O for 15 seconds... 
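The JSON summary printed a little earlier is worth a quick consistency check: queue_depth 128 and io_size 4096 match the -q 128 -o 4096 options bdevperf was started with, the ~15.01 s runtime matches -t 15, and the reported bandwidth is simply IOPS times the 4 KiB block size. A one-off check of that last figure (not part of the harness):

  awk 'BEGIN { printf "%.4f MiB/s\n", 11403.936758400367 * 4096 / (1024 * 1024) }'
  # prints 44.5466 MiB/s, in line with the reported "mibps" of 44.54662796250143

The io_failed count of 18037 is expected to be non-zero here, since the test deliberately removes the active listener several times while I/O is running; presumably these are the commands aborted during each path switch, which is what the ABORTED - SQ DELETION completions in the try.txt replay that follows correspond to.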
00:31:26.527 11550.00 IOPS, 45.12 MiB/s [2024-12-09T23:13:11.000Z] [2024-12-10 00:12:56.418986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:26.527 [2024-12-10 00:12:56.419418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 00:12:56.419594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.527 [2024-12-10 00:12:56.419603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.527 [2024-12-10 
00:12:56.419614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.527 [2024-12-10 00:12:56.419623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.419980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.419991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.420010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420021] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.420030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.420049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.420069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.528 [2024-12-10 00:12:56.420088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 
lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.528 [2024-12-10 00:12:56.420563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.528 [2024-12-10 00:12:56.420572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 
00:12:56.420611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.420982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.420992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.421001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.529 [2024-12-10 00:12:56.421020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103872 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103880 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103888 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103896 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103904 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103912 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103920 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103928 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103936 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103944 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103952 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:103960 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103368 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103376 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103384 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103392 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103400 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103408 len:8 PRP1 0x0 PRP2 0x0 
00:31:26.529 [2024-12-10 00:12:56.421614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.421623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.421630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.421638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103416 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.421646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.432457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.529 [2024-12-10 00:12:56.432471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.529 [2024-12-10 00:12:56.432482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103424 len:8 PRP1 0x0 PRP2 0x0 00:31:26.529 [2024-12-10 00:12:56.432494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.529 [2024-12-10 00:12:56.432506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.530 [2024-12-10 00:12:56.432515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.530 [2024-12-10 00:12:56.432525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103432 len:8 PRP1 0x0 PRP2 0x0 00:31:26.530 [2024-12-10 00:12:56.432537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:56.432549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.530 [2024-12-10 00:12:56.432558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.530 [2024-12-10 00:12:56.432571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103440 len:8 PRP1 0x0 PRP2 0x0 00:31:26.530 [2024-12-10 00:12:56.432582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:56.432595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.530 [2024-12-10 00:12:56.432605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.530 [2024-12-10 00:12:56.432614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103448 len:8 PRP1 0x0 PRP2 0x0 00:31:26.530 [2024-12-10 00:12:56.432625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:56.432637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.530 [2024-12-10 00:12:56.432647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.530 [2024-12-10 00:12:56.432656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103456 len:8 PRP1 0x0 PRP2 0x0 00:31:26.530 [2024-12-10 00:12:56.432668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:56.432680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.530 [2024-12-10 00:12:56.432689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.530 [2024-12-10 00:12:56.432698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103464 len:8 PRP1 0x0 PRP2 0x0 00:31:26.530 [2024-12-10 00:12:56.432710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:56.432722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.530 [2024-12-10 00:12:56.432731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.530 [2024-12-10 00:12:56.432741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103472 len:8 PRP1 0x0 PRP2 0x0 00:31:26.530 [2024-12-10 00:12:56.432753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:56.432805] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:26.530 [2024-12-10 00:12:56.432840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.530 [2024-12-10 00:12:56.432854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:56.432867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.530 [2024-12-10 00:12:56.432878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:56.432891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.530 [2024-12-10 00:12:56.432903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:56.432915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.530 [2024-12-10 00:12:56.432927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:56.432938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:31:26.530 [2024-12-10 00:12:56.432987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b60550 (9): Bad file descriptor 00:31:26.530 [2024-12-10 00:12:56.436620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:26.530 [2024-12-10 00:12:56.584336] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:31:26.530 10645.00 IOPS, 41.58 MiB/s [2024-12-09T23:13:11.003Z] 10986.67 IOPS, 42.92 MiB/s [2024-12-09T23:13:11.003Z] 11215.00 IOPS, 43.81 MiB/s [2024-12-09T23:13:11.003Z] [2024-12-10 00:12:59.940829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.530 [2024-12-10 00:12:59.940865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.940881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.940892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.940903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.940912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.940922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.940931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.940942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.940951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.940961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.940970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.940981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.940990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.530 [2024-12-10 00:12:59.941337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.530 [2024-12-10 00:12:59.941347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941648] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941848] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.941988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.941999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82776 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 
[2024-12-10 00:12:59.942270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.531 [2024-12-10 00:12:59.942473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.531 [2024-12-10 00:12:59.942482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.532 [2024-12-10 00:12:59.942501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.532 [2024-12-10 00:12:59.942520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.532 [2024-12-10 00:12:59.942539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.532 [2024-12-10 00:12:59.942558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.532 [2024-12-10 00:12:59.942577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.532 [2024-12-10 00:12:59.942597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.532 [2024-12-10 00:12:59.942616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.532 [2024-12-10 00:12:59.942635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.532 [2024-12-10 00:12:59.942654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.532 [2024-12-10 00:12:59.942674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.532 [2024-12-10 00:12:59.942693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.532 [2024-12-10 00:12:59.942714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.942744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83048 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.942753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.942774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.942781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83056 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.942790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.942807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.942814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83064 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.942826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.942843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.942850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83072 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.942861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.942879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.942887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83080 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.942895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.942912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.942920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83088 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.942929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.942945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.942954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83096 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.942962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.942971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.942978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.942986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83104 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.942995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.943006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.943013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.943020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83112 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.943035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.943045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.943053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.943061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83120 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.943069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.943078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.943085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 
00:12:59.943096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83128 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.943105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.943114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.943121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.943130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83136 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.943140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.943150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.943157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.943164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83144 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.943173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.943183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.943190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.943197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83152 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.943208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.943217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.943224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.943231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83160 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.943240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.943250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.943257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.943265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83168 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.943273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.943282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.943289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.943298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83176 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.943309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.943318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.943325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.943333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83184 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.943342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.943351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.943358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.943365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83192 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.943374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.943386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.943397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.943405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83200 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.943415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.532 [2024-12-10 00:12:59.943424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.532 [2024-12-10 00:12:59.943432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.532 [2024-12-10 00:12:59.943439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83208 len:8 PRP1 0x0 PRP2 0x0 00:31:26.532 [2024-12-10 00:12:59.943448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:12:59.943457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.533 [2024-12-10 00:12:59.943464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.533 [2024-12-10 00:12:59.943471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83216 len:8 PRP1 0x0 PRP2 0x0 00:31:26.533 [2024-12-10 00:12:59.943479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:12:59.943488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.533 [2024-12-10 00:12:59.943495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.533 [2024-12-10 00:12:59.943502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:83224 len:8 PRP1 0x0 PRP2 0x0 00:31:26.533 [2024-12-10 00:12:59.943511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:12:59.943520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.533 [2024-12-10 00:12:59.943527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.533 [2024-12-10 00:12:59.943534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83232 len:8 PRP1 0x0 PRP2 0x0 00:31:26.533 [2024-12-10 00:12:59.943542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:12:59.943551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.533 [2024-12-10 00:12:59.943558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.533 [2024-12-10 00:12:59.943565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83240 len:8 PRP1 0x0 PRP2 0x0 00:31:26.533 [2024-12-10 00:12:59.943575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:12:59.943583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.533 [2024-12-10 00:12:59.943590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.533 [2024-12-10 00:12:59.943598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83248 len:8 PRP1 0x0 PRP2 0x0 00:31:26.533 [2024-12-10 00:12:59.943606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:12:59.943615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.533 [2024-12-10 00:12:59.943622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.533 [2024-12-10 00:12:59.943629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83256 len:8 PRP1 0x0 PRP2 0x0 00:31:26.533 [2024-12-10 00:12:59.943640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:12:59.943649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.533 [2024-12-10 00:12:59.943655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.533 [2024-12-10 00:12:59.943664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82248 len:8 PRP1 0x0 PRP2 0x0 00:31:26.533 [2024-12-10 00:12:59.943673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:12:59.943682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.533 [2024-12-10 00:12:59.943689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.533 [2024-12-10 00:12:59.943698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82256 len:8 PRP1 0x0 PRP2 0x0 
00:31:26.533 [2024-12-10 00:12:59.943706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.533 [2024-12-10 00:12:59.943715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.533 [2024-12-10 00:12:59.943722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.533 [2024-12-10 00:12:59.943730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82264 len:8 PRP1 0x0 PRP2 0x0
00:31:26.533 [2024-12-10 00:12:59.943738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.533 [2024-12-10 00:12:59.943747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.533 [2024-12-10 00:12:59.943754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.533 [2024-12-10 00:12:59.943762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82272 len:8 PRP1 0x0 PRP2 0x0
00:31:26.533 [2024-12-10 00:12:59.943771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.533 [2024-12-10 00:12:59.943779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.533 [2024-12-10 00:12:59.943786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.533 [2024-12-10 00:12:59.943793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82280 len:8 PRP1 0x0 PRP2 0x0
00:31:26.533 [2024-12-10 00:12:59.943802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.533 [2024-12-10 00:12:59.955150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.533 [2024-12-10 00:12:59.955162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.533 [2024-12-10 00:12:59.955171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82288 len:8 PRP1 0x0 PRP2 0x0
00:31:26.533 [2024-12-10 00:12:59.955180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.533 [2024-12-10 00:12:59.955189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:26.533 [2024-12-10 00:12:59.955196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:26.533 [2024-12-10 00:12:59.955204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82296 len:8 PRP1 0x0 PRP2 0x0
00:31:26.533 [2024-12-10 00:12:59.955213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.533 [2024-12-10 00:12:59.955262] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:31:26.533 [2024-12-10 00:12:59.955287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:26.533 [2024-12-10 00:12:59.955299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.533 [2024-12-10 00:12:59.955309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:26.533 [2024-12-10 00:12:59.955318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.533 [2024-12-10 00:12:59.955328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:26.533 [2024-12-10 00:12:59.955338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.533 [2024-12-10 00:12:59.955347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:26.533 [2024-12-10 00:12:59.955357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.533 [2024-12-10 00:12:59.955367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:31:26.533 [2024-12-10 00:12:59.955401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b60550 (9): Bad file descriptor
00:31:26.533 [2024-12-10 00:12:59.958551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:31:26.533 [2024-12-10 00:13:00.028802] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:31:26.533 11118.20 IOPS, 43.43 MiB/s [2024-12-09T23:13:11.006Z] 11211.83 IOPS, 43.80 MiB/s [2024-12-09T23:13:11.006Z] 11288.57 IOPS, 44.10 MiB/s [2024-12-09T23:13:11.006Z] 11332.75 IOPS, 44.27 MiB/s [2024-12-09T23:13:11.006Z] 11390.67 IOPS, 44.49 MiB/s [2024-12-09T23:13:11.006Z] [2024-12-10 00:13:04.364725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.533 [2024-12-10 00:13:04.364759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.533 [2024-12-10 00:13:04.364776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.533 [2024-12-10 00:13:04.364786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.533 [2024-12-10 00:13:04.364797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.533 [2024-12-10 00:13:04.364807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.533 [2024-12-10 00:13:04.364817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:26.533 [2024-12-10 00:13:04.364832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:26.533 [2024-12-10 00:13:04.364843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:31:26.533 [2024-12-10 00:13:04.364852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.364862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.364872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.364882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.364892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.364906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.364916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.364926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.364935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.364946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.364955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.364965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.364974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.364985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.364994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.365004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.365014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.365024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.365033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.365044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 
00:13:04.365053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.365063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.365072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.365083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:123992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.365092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.365103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.365112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.365122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.365131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.365142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.365152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.365163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.365172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.365182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.365191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.365201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.365210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.365221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.365230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.365240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.365250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.533 [2024-12-10 00:13:04.365261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.533 [2024-12-10 00:13:04.365270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.534 [2024-12-10 00:13:04.365289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.534 [2024-12-10 00:13:04.365309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.534 [2024-12-10 00:13:04.365329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.534 [2024-12-10 00:13:04.365349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365449] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:124248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:124264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:124272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:124288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:124304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:124320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:124328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:124368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:26.534 [2024-12-10 00:13:04.365850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:124376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:124416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.365987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:124432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.365996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.366007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.366016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.366027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.366036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.366046] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:124456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.366056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.366066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.366075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.366086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.366095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.366105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.366114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.534 [2024-12-10 00:13:04.366124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:124488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.534 [2024-12-10 00:13:04.366137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:124544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.535 [2024-12-10 00:13:04.366292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.535 [2024-12-10 00:13:04.366311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.535 [2024-12-10 00:13:04.366331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.535 [2024-12-10 00:13:04.366350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.535 [2024-12-10 00:13:04.366371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.535 [2024-12-10 00:13:04.366391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.535 [2024-12-10 00:13:04.366411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366440] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:124560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:124576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.535 [2024-12-10 00:13:04.366583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:26.535 [2024-12-10 00:13:04.366602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:124616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.366982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.366993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.367002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.367012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.367022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.367032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 
00:13:04.367041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.367051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.367060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.367070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.367080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.367090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:124808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.367099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.367109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:124816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.367118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.367130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.367139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.367149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:124832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.367158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.367168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:124840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.367177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.367187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:124848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.367197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.367207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.367216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.367226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:26.535 [2024-12-10 00:13:04.367235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.535 [2024-12-10 00:13:04.367256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.536 [2024-12-10 00:13:04.367265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124872 len:8 PRP1 0x0 PRP2 0x0 00:31:26.536 [2024-12-10 00:13:04.367277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.536 [2024-12-10 00:13:04.367289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:26.536 [2024-12-10 00:13:04.367296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:26.536 [2024-12-10 00:13:04.367304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124880 len:8 PRP1 0x0 PRP2 0x0 00:31:26.536 [2024-12-10 00:13:04.367314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.536 [2024-12-10 00:13:04.367364] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:26.536 [2024-12-10 00:13:04.367389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.536 [2024-12-10 00:13:04.367399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.536 [2024-12-10 00:13:04.367409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.536 [2024-12-10 00:13:04.367418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.536 [2024-12-10 00:13:04.367427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.536 [2024-12-10 00:13:04.367437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.536 [2024-12-10 00:13:04.367446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.536 [2024-12-10 00:13:04.367457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.536 [2024-12-10 00:13:04.367466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:31:26.536 [2024-12-10 00:13:04.367490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b60550 (9): Bad file descriptor 00:31:26.536 [2024-12-10 00:13:04.370356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:31:26.536 [2024-12-10 00:13:04.513752] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
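Note on the flood of notices above: the repeated 'ABORTED - SQ DELETION' completions are queued WRITE/READ commands on qpair 1 being completed manually while the submission queue is torn down, after which bdev_nvme fails over from 10.0.0.2:4422 to 10.0.0.2:4420 and resets the controller. The test validates this behaviour by counting successful resets in the captured bdevperf log; a minimal sketch of that check follows (variable names illustrative; this run keeps the log in test/nvmf/host/try.txt):

  count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
  (( count == 3 ))   # three successful failover resets are expected at this point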
00:31:26.536 11260.00 IOPS, 43.98 MiB/s [2024-12-09T23:13:11.009Z] 11289.55 IOPS, 44.10 MiB/s [2024-12-09T23:13:11.009Z] 11326.25 IOPS, 44.24 MiB/s [2024-12-09T23:13:11.009Z] 11353.85 IOPS, 44.35 MiB/s [2024-12-09T23:13:11.009Z] 11380.43 IOPS, 44.45 MiB/s 00:31:26.536 Latency(us) 00:31:26.536 [2024-12-09T23:13:11.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:26.536 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:26.536 Verification LBA range: start 0x0 length 0x4000 00:31:26.536 NVMe0n1 : 15.01 11403.94 44.55 1201.57 0.00 10133.12 404.68 21705.52 00:31:26.536 [2024-12-09T23:13:11.009Z] =================================================================================================================== 00:31:26.536 [2024-12-09T23:13:11.009Z] Total : 11403.94 44.55 1201.57 0.00 10133.12 404.68 21705.52 00:31:26.536 Received shutdown signal, test time was about 15.000000 seconds 00:31:26.536 00:31:26.536 Latency(us) 00:31:26.536 [2024-12-09T23:13:11.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:26.536 [2024-12-09T23:13:11.009Z] =================================================================================================================== 00:31:26.536 [2024-12-09T23:13:11.009Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:26.536 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:26.536 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:26.536 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:26.536 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=530301 00:31:26.536 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:26.536 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 530301 /var/tmp/bdevperf.sock 00:31:26.536 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 530301 ']' 00:31:26.536 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:26.536 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:26.536 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:26.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
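After the timed failover run passes, the trace starts a fresh bdevperf in RPC-wait mode (-z) on a private socket, adds the extra listeners, and re-attaches the controller with explicit failover paths. A condensed sketch of the pattern the trace follows (paths shortened, PIDs omitted; not the verbatim script):

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  waitforlisten $! /var/tmp/bdevperf.sock
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover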
00:31:26.536 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:26.536 00:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:27.104 00:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:27.104 00:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:27.104 00:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:27.363 [2024-12-10 00:13:11.683146] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:27.363 00:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:27.623 [2024-12-10 00:13:11.867658] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:27.623 00:13:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:27.882 NVMe0n1 00:31:27.882 00:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:28.141 00:31:28.141 00:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:28.403 00:31:28.662 00:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:28.662 00:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:28.662 00:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:28.921 00:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:32.221 00:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:32.221 00:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:32.221 00:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:32.221 00:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=531352 00:31:32.221 00:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 531352 00:31:33.158 { 00:31:33.158 "results": [ 00:31:33.158 { 00:31:33.159 "job": "NVMe0n1", 00:31:33.159 "core_mask": "0x1", 00:31:33.159 
"workload": "verify", 00:31:33.159 "status": "finished", 00:31:33.159 "verify_range": { 00:31:33.159 "start": 0, 00:31:33.159 "length": 16384 00:31:33.159 }, 00:31:33.159 "queue_depth": 128, 00:31:33.159 "io_size": 4096, 00:31:33.159 "runtime": 1.007226, 00:31:33.159 "iops": 11547.557350584675, 00:31:33.159 "mibps": 45.10764590072139, 00:31:33.159 "io_failed": 0, 00:31:33.159 "io_timeout": 0, 00:31:33.159 "avg_latency_us": 11031.471865462987, 00:31:33.159 "min_latency_us": 2215.1168, 00:31:33.159 "max_latency_us": 10013.9008 00:31:33.159 } 00:31:33.159 ], 00:31:33.159 "core_count": 1 00:31:33.159 } 00:31:33.159 00:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:33.159 [2024-12-10 00:13:10.664326] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:31:33.159 [2024-12-10 00:13:10.664383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530301 ] 00:31:33.159 [2024-12-10 00:13:10.754133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.159 [2024-12-10 00:13:10.790982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.159 [2024-12-10 00:13:13.273875] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:33.159 [2024-12-10 00:13:13.273921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.159 [2024-12-10 00:13:13.273935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.159 [2024-12-10 00:13:13.273946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.159 [2024-12-10 00:13:13.273955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.159 [2024-12-10 00:13:13.273965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.159 [2024-12-10 00:13:13.273974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.159 [2024-12-10 00:13:13.273983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.159 [2024-12-10 00:13:13.273992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.159 [2024-12-10 00:13:13.274001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:31:33.159 [2024-12-10 00:13:13.274029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:31:33.159 [2024-12-10 00:13:13.274044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f77550 (9): Bad file descriptor 00:31:33.159 [2024-12-10 00:13:13.279021] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:31:33.159 Running I/O for 1 seconds... 00:31:33.159 11470.00 IOPS, 44.80 MiB/s 00:31:33.159 Latency(us) 00:31:33.159 [2024-12-09T23:13:17.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:33.159 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:33.159 Verification LBA range: start 0x0 length 0x4000 00:31:33.159 NVMe0n1 : 1.01 11547.56 45.11 0.00 0.00 11031.47 2215.12 10013.90 00:31:33.159 [2024-12-09T23:13:17.632Z] =================================================================================================================== 00:31:33.159 [2024-12-09T23:13:17.632Z] Total : 11547.56 45.11 0.00 0.00 11031.47 2215.12 10013.90 00:31:33.159 00:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:33.159 00:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:33.418 00:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:33.677 00:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:33.677 00:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:33.936 00:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:34.196 00:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 530301 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 530301 ']' 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 530301 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 530301 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 530301' 00:31:37.499 killing process with pid 530301 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 530301 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 530301 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:37.499 00:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.759 rmmod nvme_tcp 00:31:37.759 rmmod nvme_fabrics 00:31:37.759 rmmod nvme_keyring 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 527103 ']' 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 527103 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 527103 ']' 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 527103 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527103 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527103' 00:31:37.759 killing process with pid 527103 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 527103 00:31:37.759 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 527103 00:31:38.018 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
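Teardown in this trace proceeds in a fixed order: stop bdevperf, delete the NVMe-oF subsystem, unload the kernel nvme-tcp stack, then stop the nvmf target. A condensed sketch (PIDs and variable names illustrative):

  kill "$bdevperf_pid"                 # 530301 in this run
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp              # trace shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
  kill "$nvmfpid"                      # 527103 in this run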
00:31:38.018 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:38.018 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:38.018 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:31:38.018 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:31:38.018 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:38.018 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:31:38.018 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:38.018 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:38.018 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.018 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.018 00:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.564 00:13:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.564 00:31:40.564 real 0m40.805s 00:31:40.564 user 2m5.382s 00:31:40.564 sys 0m10.232s 00:31:40.564 00:13:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.564 00:13:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:40.564 ************************************ 00:31:40.564 END TEST nvmf_failover 00:31:40.564 ************************************ 00:31:40.564 00:13:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:40.564 00:13:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:40.564 00:13:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.564 00:13:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.564 ************************************ 00:31:40.564 START TEST nvmf_host_discovery 00:31:40.564 ************************************ 00:31:40.564 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:40.564 * Looking for test storage... 
00:31:40.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:40.564 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:40.564 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:31:40.564 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:40.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.565 --rc genhtml_branch_coverage=1 00:31:40.565 --rc genhtml_function_coverage=1 00:31:40.565 --rc genhtml_legend=1 00:31:40.565 --rc geninfo_all_blocks=1 00:31:40.565 --rc geninfo_unexecuted_blocks=1 00:31:40.565 00:31:40.565 ' 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:40.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.565 --rc genhtml_branch_coverage=1 00:31:40.565 --rc genhtml_function_coverage=1 00:31:40.565 --rc genhtml_legend=1 00:31:40.565 --rc geninfo_all_blocks=1 00:31:40.565 --rc geninfo_unexecuted_blocks=1 00:31:40.565 00:31:40.565 ' 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:40.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.565 --rc genhtml_branch_coverage=1 00:31:40.565 --rc genhtml_function_coverage=1 00:31:40.565 --rc genhtml_legend=1 00:31:40.565 --rc geninfo_all_blocks=1 00:31:40.565 --rc geninfo_unexecuted_blocks=1 00:31:40.565 00:31:40.565 ' 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:40.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.565 --rc genhtml_branch_coverage=1 00:31:40.565 --rc genhtml_function_coverage=1 00:31:40.565 --rc genhtml_legend=1 00:31:40.565 --rc geninfo_all_blocks=1 00:31:40.565 --rc geninfo_unexecuted_blocks=1 00:31:40.565 00:31:40.565 ' 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:40.565 00:13:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:40.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.565 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:40.566 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:40.566 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:40.566 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.566 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.566 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.566 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:40.566 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:40.566 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.566 00:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:48.698 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:48.698 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.698 00:13:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:48.698 Found net devices under 0000:af:00.0: cvl_0_0 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:48.698 Found net devices under 0000:af:00.1: cvl_0_1 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:48.698 
00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:48.698 00:13:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:48.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:31:48.698 00:31:48.698 --- 10.0.0.2 ping statistics --- 00:31:48.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.698 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:48.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:31:48.698 00:31:48.698 --- 10.0.0.1 ping statistics --- 00:31:48.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.698 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=535865 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 535865 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 535865 ']' 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.698 [2024-12-10 00:13:32.169422] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
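The trace above builds the NVMe/TCP test topology before the target application starts: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, the second port (cvl_0_1) stays in the default namespace as the initiator side, an iptables rule opens the 4420 data port, and both directions are ping-checked. A condensed sketch of that setup follows; every interface name, namespace name and address is taken from the log above, and this is an illustration of the recorded commands rather than the harness's nvmf_tcp_init function verbatim.

    # Condensed sketch of the topology built above; all names and addresses come from the log.
    TGT_NS=cvl_0_0_ns_spdk            # namespace that will run nvmf_tgt
    TGT_IF=cvl_0_0                    # E810 port used as the target interface (10.0.0.2)
    INI_IF=cvl_0_1                    # E810 port left in the default namespace (10.0.0.1)

    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$TGT_NS"
    ip link set "$TGT_IF" netns "$TGT_NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$TGT_NS" ip link set "$TGT_IF" up
    ip netns exec "$TGT_NS" ip link set lo up

    # Open the NVMe/TCP data port on the initiator side and prove both directions work.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$TGT_NS" ping -c 1 10.0.0.1

As seen in the trace, the harness additionally tags the iptables rule with an 'SPDK_NVMF:' comment so the rule can be identified later.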
00:31:48.698 [2024-12-10 00:13:32.169470] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.698 [2024-12-10 00:13:32.263804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.698 [2024-12-10 00:13:32.300781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.698 [2024-12-10 00:13:32.300819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.698 [2024-12-10 00:13:32.300834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.698 [2024-12-10 00:13:32.300842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.698 [2024-12-10 00:13:32.300849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:48.698 [2024-12-10 00:13:32.301416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.698 00:13:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.698 [2024-12-10 00:13:33.051020] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.698 [2024-12-10 00:13:33.063223] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.698 null0 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.698 null1 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=536120 00:31:48.698 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:48.699 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 536120 /tmp/host.sock 00:31:48.699 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 536120 ']' 00:31:48.699 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:31:48.699 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.699 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:48.699 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:48.699 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.699 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:48.699 [2024-12-10 00:13:33.143924] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
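At this point two SPDK applications are running: nvmf_tgt inside the cvl_0_0_ns_spdk namespace acting as the target (pid 535865, core mask 0x2, default RPC socket), and a second nvmf_tgt in the default namespace acting as the host (pid 536120, core mask 0x1, RPC socket /tmp/host.sock). The control-plane calls issued so far are sketched below as direct rpc.py invocations instead of the harness's rpc_cmd wrapper, and with abbreviated binary paths; both are assumptions made for readability, while the commands and arguments themselves are copied from the log.

    # Target side (runs inside the namespace; uses the default RPC socket /var/tmp/spdk.sock):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # Host side: a second nvmf_tgt used purely as the NVMe-oF host, with its own RPC socket.
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &

    # Target-side control plane issued so far:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512
    scripts/rpc.py bdev_null_create null1 1000 512
    scripts/rpc.py bdev_wait_for_examine

Because the RPC endpoints are unix-domain sockets on the shared filesystem, both applications can be driven from the default namespace even though the target's TCP listeners are only reachable inside cvl_0_0_ns_spdk.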
00:31:48.699 [2024-12-10 00:13:33.143972] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid536120 ] 00:31:48.961 [2024-12-10 00:13:33.232622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.961 [2024-12-10 00:13:33.273169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:49.529 00:13:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:49.789 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.049 [2024-12-10 00:13:34.290388] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:50.049 00:13:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:50.049 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:31:50.050 00:13:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:31:50.624 [2024-12-10 00:13:35.040973] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:50.624 [2024-12-10 00:13:35.040992] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:50.624 [2024-12-10 00:13:35.041007] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:50.883 
[2024-12-10 00:13:35.127263] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:50.883 [2024-12-10 00:13:35.181848] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:31:50.883 [2024-12-10 00:13:35.182640] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb32f10:1 started. 00:31:50.883 [2024-12-10 00:13:35.184109] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:50.883 [2024-12-10 00:13:35.184127] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:50.883 [2024-12-10 00:13:35.189207] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb32f10 was disconnected and freed. delete nvme_qpair. 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.143 00:13:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.143 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:31:51.403 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:51.404 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:51.663 [2024-12-10 00:13:35.879575] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb33350:1 started. 00:31:51.663 [2024-12-10 00:13:35.881945] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb33350 was disconnected and freed. delete nvme_qpair. 
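The recurring pattern in this trace is a host-side verification loop: after each change on the target (new subsystem, namespace, listener, or allowed host), the harness polls the host application's RPC socket until bdev_nvme's view of controllers, paths and bdevs catches up, then checks the notification counters raised by the AER-driven discovery updates. Below is a simplified restatement of that pattern; the helper bodies are paraphrased from the trace output, and host_rpc is a hypothetical shorthand rather than a harness function.

    # Simplified restatement of the polling pattern seen above (not the harness source).
    host_rpc() { scripts/rpc.py -s /tmp/host.sock "$@"; }

    get_subsystem_names() { host_rpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
    get_bdev_list()       { host_rpc bdev_get_bdevs           | jq -r '.[].name' | sort | xargs; }

    waitforcondition() {           # retry a shell condition up to ~10 times, one second apart
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # Start discovery against the target's discovery service and wait until the
    # attached controller and its namespace bdev are visible on the host side.
    host_rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'

The remainder of the trace repeats the same wait for each target-side change: adding null1 (nvme0n2 appears), adding the 4421 listener (a second path appears under get_subsystem_paths), and removing the 4420 listener.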
00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.663 [2024-12-10 00:13:35.974948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:51.663 [2024-12-10 00:13:35.975989] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:51.663 [2024-12-10 00:13:35.976007] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:51.663 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:51.664 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:31:51.664 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:51.664 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:51.664 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.664 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.664 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:51.664 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:51.664 00:13:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:51.664 [2024-12-10 00:13:36.062560] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.664 [2024-12-10 00:13:36.126215] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:31:51.664 [2024-12-10 00:13:36.126248] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:51.664 [2024-12-10 00:13:36.126257] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:51.664 [2024-12-10 00:13:36.126264] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:51.664 00:13:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:31:53.045 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:53.045 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:53.045 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:31:53.045 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:53.045 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:53.045 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:53.045 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:53.045 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:53.045 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:53.045 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.045 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:53.045 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:53.045 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:53.046 [2024-12-10 00:13:37.235185] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:53.046 [2024-12-10 00:13:37.235206] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:31:53.046 [2024-12-10 00:13:37.244077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.046 [2024-12-10 00:13:37.244098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.046 [2024-12-10 00:13:37.244109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.046 [2024-12-10 00:13:37.244119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.046 [2024-12-10 00:13:37.244129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.046 [2024-12-10 00:13:37.244138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.046 [2024-12-10 00:13:37.244148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.046 [2024-12-10 00:13:37.244158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.046 [2024-12-10 00:13:37.244167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb03390 is same with the state(6) to be set 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:53.046 [2024-12-10 00:13:37.254091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03390 (9): Bad file descriptor 00:31:53.046 [2024-12-10 00:13:37.264128] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:53.046 [2024-12-10 00:13:37.264140] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:53.046 [2024-12-10 00:13:37.264149] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:53.046 [2024-12-10 00:13:37.264156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:53.046 [2024-12-10 00:13:37.264175] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:53.046 [2024-12-10 00:13:37.264442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.046 [2024-12-10 00:13:37.264460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb03390 with addr=10.0.0.2, port=4420 00:31:53.046 [2024-12-10 00:13:37.264471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb03390 is same with the state(6) to be set 00:31:53.046 [2024-12-10 00:13:37.264484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03390 (9): Bad file descriptor 00:31:53.046 [2024-12-10 00:13:37.264509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:53.046 [2024-12-10 00:13:37.264520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:53.046 [2024-12-10 00:13:37.264534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:53.046 [2024-12-10 00:13:37.264542] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:53.046 [2024-12-10 00:13:37.264548] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:53.046 [2024-12-10 00:13:37.264554] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:31:53.046 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.046 [2024-12-10 00:13:37.274206] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:53.046 [2024-12-10 00:13:37.274219] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:53.046 [2024-12-10 00:13:37.274225] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:53.046 [2024-12-10 00:13:37.274231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:53.046 [2024-12-10 00:13:37.274248] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:53.046 [2024-12-10 00:13:37.274427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.046 [2024-12-10 00:13:37.274441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb03390 with addr=10.0.0.2, port=4420 00:31:53.046 [2024-12-10 00:13:37.274452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb03390 is same with the state(6) to be set 00:31:53.046 [2024-12-10 00:13:37.274464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03390 (9): Bad file descriptor 00:31:53.046 [2024-12-10 00:13:37.274477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:53.046 [2024-12-10 00:13:37.274487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:53.046 [2024-12-10 00:13:37.274496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:53.046 [2024-12-10 00:13:37.274503] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:53.046 [2024-12-10 00:13:37.274509] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:53.046 [2024-12-10 00:13:37.274515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:53.046 [2024-12-10 00:13:37.284279] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:53.046 [2024-12-10 00:13:37.284294] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:53.046 [2024-12-10 00:13:37.284300] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:53.046 [2024-12-10 00:13:37.284306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:53.046 [2024-12-10 00:13:37.284324] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:53.046 [2024-12-10 00:13:37.284504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.046 [2024-12-10 00:13:37.284518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb03390 with addr=10.0.0.2, port=4420 00:31:53.046 [2024-12-10 00:13:37.284529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb03390 is same with the state(6) to be set 00:31:53.046 [2024-12-10 00:13:37.284542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03390 (9): Bad file descriptor 00:31:53.046 [2024-12-10 00:13:37.284561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:53.046 [2024-12-10 00:13:37.284573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:53.046 [2024-12-10 00:13:37.284583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:53.046 [2024-12-10 00:13:37.284590] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:53.047 [2024-12-10 00:13:37.284597] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:53.047 [2024-12-10 00:13:37.284602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:53.047 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.047 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:53.047 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:53.047 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:53.047 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:53.047 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:53.047 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:53.047 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:31:53.047 [2024-12-10 00:13:37.294354] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:53.047 [2024-12-10 00:13:37.294368] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:53.047 [2024-12-10 00:13:37.294374] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:53.047 [2024-12-10 00:13:37.294379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:53.047 [2024-12-10 00:13:37.294396] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:53.047 [2024-12-10 00:13:37.294646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.047 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:53.047 [2024-12-10 00:13:37.294662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb03390 with addr=10.0.0.2, port=4420 00:31:53.047 [2024-12-10 00:13:37.294676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb03390 is same with the state(6) to be set 00:31:53.047 [2024-12-10 00:13:37.294688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03390 (9): Bad file descriptor 00:31:53.047 [2024-12-10 00:13:37.294707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:53.047 [2024-12-10 00:13:37.294717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:53.047 [2024-12-10 00:13:37.294726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:53.047 [2024-12-10 00:13:37.294734] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:53.047 [2024-12-10 00:13:37.294740] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:53.047 [2024-12-10 00:13:37.294745] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:53.047 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:53.047 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.047 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:53.047 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:53.047 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:53.047 [2024-12-10 00:13:37.304427] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:53.047 [2024-12-10 00:13:37.304442] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:53.047 [2024-12-10 00:13:37.304448] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:53.047 [2024-12-10 00:13:37.304454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:53.047 [2024-12-10 00:13:37.304472] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:53.047 [2024-12-10 00:13:37.304601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.047 [2024-12-10 00:13:37.304618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb03390 with addr=10.0.0.2, port=4420 00:31:53.047 [2024-12-10 00:13:37.304628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb03390 is same with the state(6) to be set 00:31:53.047 [2024-12-10 00:13:37.304641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03390 (9): Bad file descriptor 00:31:53.047 [2024-12-10 00:13:37.304670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:53.047 [2024-12-10 00:13:37.304679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:53.047 [2024-12-10 00:13:37.304690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:53.047 [2024-12-10 00:13:37.304698] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:53.047 [2024-12-10 00:13:37.304705] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:53.047 [2024-12-10 00:13:37.304711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:53.047 [2024-12-10 00:13:37.314502] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:53.047 [2024-12-10 00:13:37.314514] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:53.047 [2024-12-10 00:13:37.314520] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:53.047 [2024-12-10 00:13:37.314526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:53.047 [2024-12-10 00:13:37.314542] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:53.047 [2024-12-10 00:13:37.314698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.047 [2024-12-10 00:13:37.314711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb03390 with addr=10.0.0.2, port=4420 00:31:53.047 [2024-12-10 00:13:37.314721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb03390 is same with the state(6) to be set 00:31:53.047 [2024-12-10 00:13:37.314733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03390 (9): Bad file descriptor 00:31:53.047 [2024-12-10 00:13:37.314745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:53.047 [2024-12-10 00:13:37.314754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:53.047 [2024-12-10 00:13:37.314765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:53.047 [2024-12-10 00:13:37.314777] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:31:53.047 [2024-12-10 00:13:37.314783] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:53.047 [2024-12-10 00:13:37.314789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:53.047 [2024-12-10 00:13:37.324573] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:53.047 [2024-12-10 00:13:37.324587] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:53.047 [2024-12-10 00:13:37.324593] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:53.047 [2024-12-10 00:13:37.324599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:53.047 [2024-12-10 00:13:37.324616] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:53.047 [2024-12-10 00:13:37.324710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.047 [2024-12-10 00:13:37.324724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb03390 with addr=10.0.0.2, port=4420 00:31:53.047 [2024-12-10 00:13:37.324733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb03390 is same with the state(6) to be set 00:31:53.047 [2024-12-10 00:13:37.324745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03390 (9): Bad file descriptor 00:31:53.047 [2024-12-10 00:13:37.324763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:53.047 [2024-12-10 00:13:37.324772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:53.047 [2024-12-10 00:13:37.324781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:53.047 [2024-12-10 00:13:37.324789] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:53.047 [2024-12-10 00:13:37.324795] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:53.047 [2024-12-10 00:13:37.324801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:53.047 [2024-12-10 00:13:37.334646] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:53.047 [2024-12-10 00:13:37.334659] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:53.047 [2024-12-10 00:13:37.334665] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:53.047 [2024-12-10 00:13:37.334670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:53.047 [2024-12-10 00:13:37.334687] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:53.047 [2024-12-10 00:13:37.334919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.047 [2024-12-10 00:13:37.334934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb03390 with addr=10.0.0.2, port=4420 00:31:53.047 [2024-12-10 00:13:37.334944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb03390 is same with the state(6) to be set 00:31:53.047 [2024-12-10 00:13:37.334956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03390 (9): Bad file descriptor 00:31:53.047 [2024-12-10 00:13:37.334968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:53.047 [2024-12-10 00:13:37.334977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:53.047 [2024-12-10 00:13:37.334992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:53.047 [2024-12-10 00:13:37.335000] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:53.047 [2024-12-10 00:13:37.335006] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:53.047 [2024-12-10 00:13:37.335012] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.048 [2024-12-10 00:13:37.344718] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:53.048 [2024-12-10 00:13:37.344730] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:53.048 [2024-12-10 00:13:37.344736] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:53.048 [2024-12-10 00:13:37.344741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:53.048 [2024-12-10 00:13:37.344758] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:53.048 [2024-12-10 00:13:37.344907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.048 [2024-12-10 00:13:37.344921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb03390 with addr=10.0.0.2, port=4420 00:31:53.048 [2024-12-10 00:13:37.344931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb03390 is same with the state(6) to be set 00:31:53.048 [2024-12-10 00:13:37.344946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03390 (9): Bad file descriptor 00:31:53.048 [2024-12-10 00:13:37.344964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:53.048 [2024-12-10 00:13:37.344974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:53.048 [2024-12-10 00:13:37.344985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:31:53.048 [2024-12-10 00:13:37.344994] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:53.048 [2024-12-10 00:13:37.345001] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:53.048 [2024-12-10 00:13:37.345008] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:53.048 [2024-12-10 00:13:37.354788] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:53.048 [2024-12-10 00:13:37.354802] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:53.048 [2024-12-10 00:13:37.354808] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:53.048 [2024-12-10 00:13:37.354813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:53.048 [2024-12-10 00:13:37.354834] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:53.048 [2024-12-10 00:13:37.354930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:53.048 [2024-12-10 00:13:37.354944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb03390 with addr=10.0.0.2, port=4420 00:31:53.048 [2024-12-10 00:13:37.354953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb03390 is same with the state(6) to be set 00:31:53.048 [2024-12-10 00:13:37.354965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03390 (9): Bad file descriptor 00:31:53.048 [2024-12-10 00:13:37.354978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:53.048 [2024-12-10 00:13:37.354986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:53.048 [2024-12-10 00:13:37.354995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:53.048 [2024-12-10 00:13:37.355003] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:53.048 [2024-12-10 00:13:37.355009] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:53.048 [2024-12-10 00:13:37.355015] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:53.048 [2024-12-10 00:13:37.361390] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:53.048 [2024-12-10 00:13:37.361409] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:31:53.048 00:13:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:31:53.985 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:53.985 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:53.985 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:31:53.985 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:53.985 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:53.985 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.985 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:53.985 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:53.985 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:53.985 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.985 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # 
[[ 4421 == \4\4\2\1 ]] 00:31:53.985 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:53.985 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:53.986 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:53.986 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:53.986 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:53.986 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:53.986 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:53.986 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:53.986 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:31:53.986 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:53.986 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:53.986 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.986 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 
00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:31:54.245 
00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.245 00:13:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.623 [2024-12-10 00:13:39.712970] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:55.623 [2024-12-10 00:13:39.712988] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:55.623 [2024-12-10 00:13:39.713001] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:55.623 [2024-12-10 00:13:39.801262] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:55.882 [2024-12-10 00:13:40.109613] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:31:55.882 [2024-12-10 00:13:40.110231] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xb381f0:1 started. 00:31:55.882 [2024-12-10 00:13:40.111968] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:55.882 [2024-12-10 00:13:40.111997] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:55.882 [2024-12-10 00:13:40.113155] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xb381f0 was disconnected and freed. delete nvme_qpair. 
00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.882 request: 00:31:55.882 { 00:31:55.882 "name": "nvme", 00:31:55.882 "trtype": "tcp", 00:31:55.882 "traddr": "10.0.0.2", 00:31:55.882 "adrfam": "ipv4", 00:31:55.882 "trsvcid": "8009", 00:31:55.882 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:55.882 "wait_for_attach": true, 00:31:55.882 "method": "bdev_nvme_start_discovery", 00:31:55.882 "req_id": 1 00:31:55.882 } 00:31:55.882 Got JSON-RPC error response 00:31:55.882 response: 00:31:55.882 { 00:31:55.882 "code": -17, 00:31:55.882 "message": "File exists" 00:31:55.882 } 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:55.882 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.883 request: 00:31:55.883 { 00:31:55.883 "name": "nvme_second", 00:31:55.883 "trtype": "tcp", 00:31:55.883 "traddr": "10.0.0.2", 00:31:55.883 "adrfam": "ipv4", 00:31:55.883 "trsvcid": "8009", 00:31:55.883 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:55.883 "wait_for_attach": true, 00:31:55.883 "method": "bdev_nvme_start_discovery", 00:31:55.883 "req_id": 1 00:31:55.883 } 00:31:55.883 Got JSON-RPC error response 00:31:55.883 response: 00:31:55.883 { 00:31:55.883 "code": -17, 00:31:55.883 "message": "File exists" 00:31:55.883 } 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:55.883 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:56.142 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:56.142 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:56.142 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:56.142 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q 
nqn.2021-12.io.spdk:test -T 3000 00:31:56.142 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.142 00:13:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:57.084 [2024-12-10 00:13:41.364502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:57.084 [2024-12-10 00:13:41.364532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb32820 with addr=10.0.0.2, port=8010 00:31:57.084 [2024-12-10 00:13:41.364548] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:57.084 [2024-12-10 00:13:41.364557] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:57.084 [2024-12-10 00:13:41.364566] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:58.021 [2024-12-10 00:13:42.366875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:58.021 [2024-12-10 00:13:42.366902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb33be0 with addr=10.0.0.2, port=8010 00:31:58.021 [2024-12-10 00:13:42.366918] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:58.021 [2024-12-10 00:13:42.366927] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:58.021 [2024-12-10 00:13:42.366935] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:58.958 [2024-12-10 00:13:43.369095] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:58.958 request: 00:31:58.958 { 00:31:58.958 "name": "nvme_second", 00:31:58.958 "trtype": "tcp", 00:31:58.958 "traddr": "10.0.0.2", 00:31:58.958 "adrfam": "ipv4", 00:31:58.958 "trsvcid": "8010", 00:31:58.958 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:58.958 "wait_for_attach": false, 00:31:58.958 "attach_timeout_ms": 3000, 00:31:58.958 "method": "bdev_nvme_start_discovery", 00:31:58.958 "req_id": 1 00:31:58.958 } 00:31:58.958 Got JSON-RPC error response 00:31:58.958 response: 00:31:58.958 { 00:31:58.958 "code": -110, 00:31:58.958 "message": "Connection timed out" 00:31:58.958 } 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:58.958 
00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 536120 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:58.958 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:59.217 rmmod nvme_tcp 00:31:59.217 rmmod nvme_fabrics 00:31:59.217 rmmod nvme_keyring 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 535865 ']' 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 535865 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 535865 ']' 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 535865 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 535865 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 535865' 00:31:59.217 killing process with pid 535865 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 535865 00:31:59.217 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 535865 00:31:59.477 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:59.477 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:59.477 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:59.477 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:31:59.477 00:13:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:31:59.477 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:59.477 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:31:59.477 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:59.477 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:59.477 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.477 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.477 00:13:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.383 00:13:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:01.383 00:32:01.383 real 0m21.217s 00:32:01.383 user 0m25.337s 00:32:01.383 sys 0m7.716s 00:32:01.383 00:13:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:01.383 00:13:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:01.383 ************************************ 00:32:01.383 END TEST nvmf_host_discovery 00:32:01.383 ************************************ 00:32:01.383 00:13:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:01.383 00:13:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:01.383 00:13:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:01.383 00:13:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.644 ************************************ 00:32:01.644 START TEST nvmf_host_multipath_status 00:32:01.644 ************************************ 00:32:01.644 00:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:01.644 * Looking for test storage... 
00:32:01.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:01.644 00:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:01.644 00:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:32:01.644 00:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:01.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.644 --rc genhtml_branch_coverage=1 00:32:01.644 --rc genhtml_function_coverage=1 00:32:01.644 --rc genhtml_legend=1 00:32:01.644 --rc geninfo_all_blocks=1 00:32:01.644 --rc geninfo_unexecuted_blocks=1 00:32:01.644 00:32:01.644 ' 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:01.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.644 --rc genhtml_branch_coverage=1 00:32:01.644 --rc genhtml_function_coverage=1 00:32:01.644 --rc genhtml_legend=1 00:32:01.644 --rc geninfo_all_blocks=1 00:32:01.644 --rc geninfo_unexecuted_blocks=1 00:32:01.644 00:32:01.644 ' 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:01.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.644 --rc genhtml_branch_coverage=1 00:32:01.644 --rc genhtml_function_coverage=1 00:32:01.644 --rc genhtml_legend=1 00:32:01.644 --rc geninfo_all_blocks=1 00:32:01.644 --rc geninfo_unexecuted_blocks=1 00:32:01.644 00:32:01.644 ' 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:01.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.644 --rc genhtml_branch_coverage=1 00:32:01.644 --rc genhtml_function_coverage=1 00:32:01.644 --rc genhtml_legend=1 00:32:01.644 --rc geninfo_all_blocks=1 00:32:01.644 --rc geninfo_unexecuted_blocks=1 00:32:01.644 00:32:01.644 ' 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
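The version check traced above (scripts/common.sh walking lcov 1.15 against 2 field by field) can be restated as a small standalone sketch; the function name version_lt below is illustrative and not the SPDK helper itself:

  # Sketch of the dotted-version comparison performed above: split both
  # versions on '.', treat missing fields as 0, and compare field by field.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0}; y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1    # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 is older than 2.x"

This mirrors why the trace above selects the pre-2.x lcov option set (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) on this runner.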
00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.644 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:01.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.645 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:01.905 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:01.905 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:01.905 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.905 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.905 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.905 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:01.905 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:01.905 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:01.905 00:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:10.032 00:13:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:10.032 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
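For context, the NIC scan above resolves each matching PCI function to its kernel net device by globbing sysfs; a minimal equivalent, using the PCI address just reported, would look like this:

  # Map a PCI function to its net device(s) via sysfs, as the scan above does.
  pci=0000:af:00.0
  for path in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$path" ] || continue
      dev=${path##*/}
      echo "Found net device under $pci: $dev ($(cat "$path/operstate"))"
  done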
00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:10.032 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:10.032 Found net devices under 0000:af:00.0: cvl_0_0 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:32:10.032 Found net devices under 0000:af:00.1: cvl_0_1 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:10.032 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:10.033 00:13:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:10.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:10.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:32:10.033 00:32:10.033 --- 10.0.0.2 ping statistics --- 00:32:10.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.033 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:10.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:10.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:32:10.033 00:32:10.033 --- 10.0.0.1 ping statistics --- 00:32:10.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.033 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=541734 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 541734 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 541734 ']' 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:10.033 00:13:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:10.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:10.033 00:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:10.033 [2024-12-10 00:13:53.480552] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:32:10.033 [2024-12-10 00:13:53.480607] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:10.033 [2024-12-10 00:13:53.579994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:10.033 [2024-12-10 00:13:53.617201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:10.033 [2024-12-10 00:13:53.617243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:10.033 [2024-12-10 00:13:53.617253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:10.033 [2024-12-10 00:13:53.617261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:10.033 [2024-12-10 00:13:53.617268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:10.033 [2024-12-10 00:13:53.618551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.033 [2024-12-10 00:13:53.618552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.033 00:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:10.033 00:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:10.033 00:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:10.033 00:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:10.033 00:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:10.033 00:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.033 00:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=541734 00:32:10.033 00:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:10.292 [2024-12-10 00:13:54.536736] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.292 00:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:10.551 Malloc0 00:32:10.552 00:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:32:10.552 00:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:10.814 00:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:11.075 [2024-12-10 00:13:55.368921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.075 00:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:11.341 [2024-12-10 00:13:55.565430] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:11.341 00:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=542118 00:32:11.341 00:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:11.341 00:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:11.341 00:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 542118 /var/tmp/bdevperf.sock 00:32:11.341 00:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 542118 ']' 00:32:11.341 00:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:11.341 00:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:11.341 00:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:11.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
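Condensing the target-side setup the trace above just performed (rpc.py stands in for the full scripts/rpc.py path used throughout this log):

  # One Malloc-backed subsystem exported twice, on ports 4420 and 4421,
  # so the host can attach two paths to the same namespace.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421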
00:32:11.341 00:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:11.341 00:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:12.279 00:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:12.279 00:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:12.279 00:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:12.280 00:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:12.848 Nvme0n1 00:32:12.848 00:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:13.108 Nvme0n1 00:32:13.108 00:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:13.108 00:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:15.015 00:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:15.015 00:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:15.275 00:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:15.534 00:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:16.474 00:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:16.474 00:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:16.474 00:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.474 00:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:16.732 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.732 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:16.732 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.732 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:16.991 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:16.991 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:16.991 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.991 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:16.991 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:16.991 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:16.991 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:16.991 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:17.250 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.250 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:17.250 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.250 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:17.508 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.508 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:17.508 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.508 00:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:17.767 00:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.767 00:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:17.767 00:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
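The check_status/port_status helpers exercised above boil down to one RPC plus a jq filter per port; restating that pattern with the socket path and port seen in the log:

  # Query bdevperf's multipath view and pull one attribute for one listener port.
  # Repeating this for .current, .connected and .accessible on ports 4420/4421
  # reproduces the six-flag check_status assertions above.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'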
00:32:18.026 00:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:18.285 00:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:19.222 00:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:19.222 00:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:19.222 00:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.222 00:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:19.481 00:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:19.481 00:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:19.481 00:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.481 00:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:19.481 00:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:19.481 00:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:19.481 00:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.481 00:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:19.739 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:19.739 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:19.739 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:19.739 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.003 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.004 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:20.004 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
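Each test case above flips the Asymmetric Namespace Access (ANA) state of the two listeners and then re-runs the status checks; the pair of calls it issues per case is simply:

  # Mark the 4420 path non-optimized and keep 4421 optimized, as in the
  # set_ANA_state non_optimized optimized step above; later cases substitute
  # optimized / non_optimized / inaccessible for the -n argument.
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n optimized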
00:32:20.004 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:20.285 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.285 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:20.285 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.285 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:20.544 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.544 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:20.544 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:20.544 00:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:20.811 00:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:21.754 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:21.755 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:21.755 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:21.755 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:22.014 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.014 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:22.014 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:22.014 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.274 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:22.274 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:22.274 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.274 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:22.533 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.533 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:22.533 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.533 00:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:22.792 00:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.792 00:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:22.792 00:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.792 00:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:22.792 00:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:22.792 00:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:22.792 00:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.792 00:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:23.051 00:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.051 00:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:23.051 00:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:23.309 00:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:23.569 00:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:24.507 00:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:24.507 00:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:24.507 00:14:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.507 00:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:24.766 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:24.766 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:24.766 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.766 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:24.766 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:24.766 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:24.766 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.766 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:25.028 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:25.028 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:25.028 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:25.028 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:25.288 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:25.288 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:25.288 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:25.288 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:25.547 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:25.547 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:25.547 00:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:25.547 00:14:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:25.547 00:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:25.547 00:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:25.547 00:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:25.806 00:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:26.065 00:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:27.001 00:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:27.001 00:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:27.001 00:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.001 00:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:27.260 00:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:27.260 00:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:27.260 00:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.260 00:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:27.522 00:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:27.522 00:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:27.522 00:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.522 00:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:27.780 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.780 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:27.780 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:27.780 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:27.780 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:27.781 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:27.781 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:27.781 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:28.040 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:28.040 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:28.040 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:28.040 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:28.299 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:28.299 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:28.299 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:28.569 00:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:28.569 00:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:29.951 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:29.951 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:29.951 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.951 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:29.951 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:29.951 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:29.951 00:14:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.951 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:29.951 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:29.951 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:29.951 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:29.951 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:30.210 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:30.210 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:30.210 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.210 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:30.470 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:30.470 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:30.470 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.470 00:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:30.729 00:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:30.729 00:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:30.729 00:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:30.729 00:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:30.988 00:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:30.988 00:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:30.988 00:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:32:30.988 00:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:31.247 00:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:31.506 00:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:32.442 00:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:32.442 00:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:32.443 00:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:32.443 00:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:32.701 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:32.701 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:32.701 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:32.701 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:32.960 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:32.960 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:32.960 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:32.960 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:33.219 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:33.219 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:33.219 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.219 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:33.219 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:33.219 00:14:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:33.219 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.219 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:33.478 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:33.478 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:33.478 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:33.478 00:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.737 00:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:33.737 00:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:33.737 00:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:33.996 00:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:34.256 00:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:35.199 00:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:35.199 00:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:35.199 00:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.199 00:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:35.459 00:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:35.459 00:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:35.459 00:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.459 00:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:35.459 00:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.459 00:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:35.721 00:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.721 00:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:35.721 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.721 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:35.721 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:35.721 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.981 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.981 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:35.981 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.981 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:36.240 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.240 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:36.240 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.240 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:36.500 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.500 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:36.500 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:36.500 00:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:36.759 00:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
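The xtrace above is one probe repeated for every ANA combination: multipath_status.sh flips the ANA state of the two listeners, sleeps for a second so the host can consume the ANA change notification, and then asserts the current/connected/accessible flags that bdev_nvme_get_io_paths reports for the paths on ports 4420 and 4421. Below is a minimal sketch of the three helpers reconstructed from this trace; the rpc.py path, bdevperf socket, NQN, address and ports are taken from the log itself, while the variable names and exact function bodies are an approximation rather than the literal contents of the test script.

    # Values observed in the trace above (not invented here).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Compare one attribute (current/connected/accessible) of the io_path that
    # uses the given listener port against the expected value.
    port_status() {
        local port=$1 attr=$2 expected=$3
        [[ $($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr") == "$expected" ]]
    }

    # One "check_status a b c d e f" line in the trace expands to these six probes.
    check_status() {
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
            port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
            port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

    # Advertise a new ANA state on each listener; the caller then sleeps 1s
    # before re-running check_status.
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

The second half of the run (from multipath_status.sh@116 onward) repeats the same matrix after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active, where both paths may be reported as current at the same time (the "check_status true true true true true true" calls above), which never happens in the earlier active_passive cycles.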
00:32:37.695 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:37.695 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:37.695 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.695 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:37.955 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.955 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:37.955 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.955 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:38.214 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.215 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:38.215 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.215 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:38.473 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.473 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:38.473 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.473 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:38.733 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.733 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:38.733 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.733 00:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:38.733 00:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.733 00:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:38.733 00:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:38.733 00:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.991 00:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.991 00:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:38.991 00:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:39.250 00:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:39.510 00:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:40.454 00:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:40.454 00:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:40.454 00:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.454 00:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:40.714 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.714 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:40.714 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.714 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:40.974 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:40.974 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:40.974 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.974 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:40.974 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]]
00:32:40.974 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:40.974 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:40.974 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:41.233 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:41.233 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:41.233 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:41.233 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:41.492 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:41.492 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:32:41.492 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:41.492 00:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:41.752 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:41.752 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 542118
00:32:41.752 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 542118 ']'
00:32:41.752 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 542118
00:32:41.752 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:32:41.752 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:41.752 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 542118
00:32:41.752 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:32:41.752 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:32:41.752 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 542118'
00:32:41.752 killing process with pid 542118
00:32:41.752 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 542118
00:32:41.752 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 542118
00:32:41.752 {
00:32:41.752 "results": [
00:32:41.752 {
00:32:41.752 "job": "Nvme0n1", "core_mask": "0x4",
00:32:41.752 "workload": "verify",
00:32:41.752 "status": "terminated",
00:32:41.752 "verify_range": {
00:32:41.752 "start": 0,
00:32:41.752 "length": 16384
00:32:41.752 },
00:32:41.752 "queue_depth": 128,
00:32:41.752 "io_size": 4096,
00:32:41.752 "runtime": 28.564414,
00:32:41.752 "iops": 11001.380949036798,
00:32:41.752 "mibps": 42.974144332174994,
00:32:41.752 "io_failed": 0,
00:32:41.752 "io_timeout": 0,
00:32:41.752 "avg_latency_us": 11615.080335592273,
00:32:41.752 "min_latency_us": 250.6752,
00:32:41.752 "max_latency_us": 3019898.88
00:32:41.752 }
00:32:41.752 ],
00:32:41.752 "core_count": 1
00:32:41.752 }
00:32:42.017 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 542118
00:32:42.017 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:42.017 [2024-12-10 00:13:55.641079] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization...
00:32:42.017 [2024-12-10 00:13:55.641135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid542118 ]
00:32:42.017 [2024-12-10 00:13:55.733552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:42.017 [2024-12-10 00:13:55.772466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:32:42.017 Running I/O for 90 seconds...
00:32:42.017 11943.00 IOPS, 46.65 MiB/s [2024-12-09T23:14:26.490Z] 11990.00 IOPS, 46.84 MiB/s [2024-12-09T23:14:26.490Z] 12019.33 IOPS, 46.95 MiB/s [2024-12-09T23:14:26.490Z] 11983.25 IOPS, 46.81 MiB/s [2024-12-09T23:14:26.490Z] 12012.00 IOPS, 46.92 MiB/s [2024-12-09T23:14:26.490Z] 11981.17 IOPS, 46.80 MiB/s [2024-12-09T23:14:26.490Z] 11975.86 IOPS, 46.78 MiB/s [2024-12-09T23:14:26.490Z] 11959.00 IOPS, 46.71 MiB/s [2024-12-09T23:14:26.490Z] 11950.78 IOPS, 46.68 MiB/s [2024-12-09T23:14:26.490Z] 11941.80 IOPS, 46.65 MiB/s [2024-12-09T23:14:26.490Z] 11933.00 IOPS, 46.61 MiB/s [2024-12-09T23:14:26.490Z] 11927.08 IOPS, 46.59 MiB/s [2024-12-09T23:14:26.490Z] [2024-12-10 00:14:10.182393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:42.017 [2024-12-10 00:14:10.182438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:32:42.017 [2024-12-10 00:14:10.182477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:28152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:42.017 [2024-12-10 00:14:10.182487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:32:42.017 [2024-12-10 00:14:10.182502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:28352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:42.017 [2024-12-10 00:14:10.182512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:32:42.017 [2024-12-10 00:14:10.182526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-12-10 
00:14:10.182535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:42.017 [2024-12-10 00:14:10.182549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.017 [2024-12-10 00:14:10.182559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:42.017 [2024-12-10 00:14:10.182573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:28376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.017 [2024-12-10 00:14:10.182582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:28384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:28392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:28416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:28424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:28432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28440 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:28456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:28480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:28488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:28496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.182973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.182989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:28504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:28512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183039] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:28560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:28576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:28584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 
00:14:10.183676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:28608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:28616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:28624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:28664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:42.018 [2024-12-10 00:14:10.183931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:28672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.018 [2024-12-10 00:14:10.183940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 
sqhd:0077 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.183956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:28680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.183966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.183982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.183991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:28696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:28720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:28736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:28744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:28752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:28776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:28784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:28792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:28800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:28816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:28824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184477] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:28840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:28856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:28864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:28872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:28888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:28896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28912 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:28920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:42.019 [2024-12-10 00:14:10.184969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.019 [2024-12-10 00:14:10.184979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.184997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.020 [2024-12-10 00:14:10.185175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.020 [2024-12-10 00:14:10.185202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.020 [2024-12-10 00:14:10.185229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.020 [2024-12-10 00:14:10.185257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.020 [2024-12-10 00:14:10.185284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.020 [2024-12-10 00:14:10.185312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.020 [2024-12-10 00:14:10.185339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
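Every completion notice in the dump above carries the same path-related status, ASYMMETRIC ACCESS INACCESSIBLE, printed as (03/02): status code type 0x03, status code 0x02; only the LBA, command ID and sqhd fields vary from record to record. A quick way to reduce such a dump to a per-status count is sketched below; it is illustrative only, and multipath.log is a placeholder name for wherever this console output was captured, not a file the test itself writes. grep -o is used so that physical lines carrying several wrapped records still contribute one match per record.

# Sketch: tally spdk_nvme_print_completion notices by status text.
# multipath.log is a hypothetical capture of this console output.
grep -o '\*NOTICE\*: [A-Z ]*([0-9a-f/]*)' multipath.log | sort | uniq -c | sort -rn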
00:32:42.020 [2024-12-10 00:14:10.185638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.020 [2024-12-10 00:14:10.185938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.020 [2024-12-10 00:14:10.185968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.185988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.020 [2024-12-10 00:14:10.185997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.186018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.020 [2024-12-10 00:14:10.186027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.186047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.020 [2024-12-10 00:14:10.186056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.186076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.020 [2024-12-10 00:14:10.186086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.186106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.020 [2024-12-10 00:14:10.186116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.020 [2024-12-10 00:14:10.186137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.020 [2024-12-10 00:14:10.186147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:10.186166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.021 [2024-12-10 00:14:10.186176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:10.186195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.021 [2024-12-10 00:14:10.186206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:10.186226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.021 [2024-12-10 00:14:10.186236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:10.186256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.021 [2024-12-10 00:14:10.186265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:10.186285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.021 [2024-12-10 00:14:10.186294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:10.186314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.021 [2024-12-10 00:14:10.186324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:10.186344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.021 [2024-12-10 00:14:10.186353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:10.186372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.021 [2024-12-10 00:14:10.186382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:10.186402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.021 [2024-12-10 00:14:10.186412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:42.021 11613.46 IOPS, 45.37 MiB/s [2024-12-09T23:14:26.494Z] 10783.93 IOPS, 42.12 MiB/s [2024-12-09T23:14:26.494Z] 10065.00 IOPS, 39.32 MiB/s [2024-12-09T23:14:26.494Z] 9687.75 IOPS, 37.84 MiB/s [2024-12-09T23:14:26.494Z] 9809.53 IOPS, 38.32 MiB/s [2024-12-09T23:14:26.494Z] 9934.78 IOPS, 38.81 MiB/s [2024-12-09T23:14:26.494Z] 10123.21 IOPS, 39.54 MiB/s [2024-12-09T23:14:26.494Z] 10311.55 IOPS, 40.28 MiB/s [2024-12-09T23:14:26.494Z] 10467.48 IOPS, 40.89 MiB/s [2024-12-09T23:14:26.494Z] 10524.05 IOPS, 41.11 MiB/s [2024-12-09T23:14:26.494Z] 10582.35 IOPS, 41.34 MiB/s [2024-12-09T23:14:26.494Z] 10669.17 IOPS, 41.68 MiB/s [2024-12-09T23:14:26.494Z] 10790.56 IOPS, 42.15 MiB/s [2024-12-09T23:14:26.494Z] 10904.81 IOPS, 42.60 MiB/s [2024-12-09T23:14:26.494Z] [2024-12-10 00:14:23.799977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.021 [2024-12-10 00:14:23.800019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.021 [2024-12-10 00:14:23.800065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67864 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.021 [2024-12-10 00:14:23.800089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.021 [2024-12-10 00:14:23.800118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.800662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.800686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.800711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.800734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.800758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.800782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.800806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.800836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800851] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.800860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.800884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.800909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.800924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.800936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.801166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.801182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.801199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.801209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.801225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.801235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.801250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.801260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.801275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.801284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.801299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.801308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.801322] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.801331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.801347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.801357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.801371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.801380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.801395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.801404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.801418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.801428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:42.021 [2024-12-10 00:14:23.801442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.021 [2024-12-10 00:14:23.801451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 
p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.801939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.022 [2024-12-10 00:14:23.801962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.801977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.022 [2024-12-10 00:14:23.801987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.802002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.022 [2024-12-10 00:14:23.802011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.802025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:68016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.022 [2024-12-10 00:14:23.802034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.802049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:68048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.022 [2024-12-10 00:14:23.802060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.802075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:68080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.022 [2024-12-10 00:14:23.802084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.803143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.803164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.803183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.803194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.803208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.803218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.803232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.803242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.803257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.803266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.803280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.803289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:42.022 [2024-12-10 00:14:23.803306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.022 [2024-12-10 00:14:23.803316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:42.023 [2024-12-10 00:14:23.803331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:42.023 [2024-12-10 00:14:23.803340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:42.023 [2024-12-10 00:14:23.803354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.023 [2024-12-10 00:14:23.803364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:42.023 [2024-12-10 00:14:23.803379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:42.023 [2024-12-10 00:14:23.803389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:42.023 10958.30 IOPS, 42.81 MiB/s [2024-12-09T23:14:26.496Z] 10987.71 IOPS, 42.92 MiB/s [2024-12-09T23:14:26.496Z] Received shutdown signal, test time was about 28.565039 seconds 00:32:42.023 00:32:42.023 Latency(us) 00:32:42.023 [2024-12-09T23:14:26.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.023 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:42.023 Verification LBA range: start 0x0 length 0x4000 00:32:42.023 Nvme0n1 : 28.56 11001.38 42.97 0.00 0.00 11615.08 250.68 3019898.88 00:32:42.023 [2024-12-09T23:14:26.496Z] =================================================================================================================== 00:32:42.023 [2024-12-09T23:14:26.496Z] Total : 11001.38 42.97 0.00 0.00 11615.08 250.68 3019898.88 00:32:42.023 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:42.023 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:32:42.023 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:42.023 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:32:42.023 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:42.023 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:32:42.282 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:42.282 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:32:42.282 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:42.282 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:42.282 rmmod nvme_tcp 00:32:42.282 rmmod nvme_fabrics 00:32:42.282 rmmod nvme_keyring 00:32:42.282 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:42.282 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:32:42.282 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:32:42.282 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 
541734 ']' 00:32:42.282 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 541734 00:32:42.282 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 541734 ']' 00:32:42.283 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 541734 00:32:42.283 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:32:42.283 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.283 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 541734 00:32:42.283 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:42.283 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:42.283 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 541734' 00:32:42.283 killing process with pid 541734 00:32:42.283 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 541734 00:32:42.283 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 541734 00:32:42.543 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:42.543 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:42.543 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:42.543 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:32:42.543 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:32:42.543 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:42.543 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:32:42.543 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:42.543 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:42.543 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.543 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.543 00:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.521 00:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:44.521 00:32:44.521 real 0m43.013s 00:32:44.521 user 1m50.361s 00:32:44.521 sys 0m15.114s 00:32:44.521 00:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:44.521 00:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:44.521 ************************************ 00:32:44.521 END TEST nvmf_host_multipath_status 00:32:44.521 ************************************ 00:32:44.521 00:14:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test 
nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:44.521 00:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:44.521 00:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:44.521 00:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.812 ************************************ 00:32:44.812 START TEST nvmf_discovery_remove_ifc 00:32:44.812 ************************************ 00:32:44.812 00:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:44.812 * Looking for test storage... 00:32:44.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:44.812 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:44.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.813 --rc genhtml_branch_coverage=1 00:32:44.813 --rc genhtml_function_coverage=1 00:32:44.813 --rc genhtml_legend=1 00:32:44.813 --rc geninfo_all_blocks=1 00:32:44.813 --rc geninfo_unexecuted_blocks=1 00:32:44.813 00:32:44.813 ' 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:44.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.813 --rc genhtml_branch_coverage=1 00:32:44.813 --rc genhtml_function_coverage=1 00:32:44.813 --rc genhtml_legend=1 00:32:44.813 --rc geninfo_all_blocks=1 00:32:44.813 --rc geninfo_unexecuted_blocks=1 00:32:44.813 00:32:44.813 ' 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:44.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.813 --rc genhtml_branch_coverage=1 00:32:44.813 --rc genhtml_function_coverage=1 00:32:44.813 --rc genhtml_legend=1 00:32:44.813 --rc geninfo_all_blocks=1 00:32:44.813 --rc geninfo_unexecuted_blocks=1 00:32:44.813 00:32:44.813 ' 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:44.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.813 --rc genhtml_branch_coverage=1 00:32:44.813 --rc genhtml_function_coverage=1 00:32:44.813 --rc genhtml_legend=1 00:32:44.813 --rc geninfo_all_blocks=1 00:32:44.813 --rc geninfo_unexecuted_blocks=1 00:32:44.813 00:32:44.813 ' 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:44.813 
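The xtrace lines above show the test harness comparing the installed lcov version against 2 (lt 1.15 2, which calls cmp_versions 1.15 '<' 2 in scripts/common.sh): both version strings are split on '.', '-' and ':', the components are compared numerically left to right, and the relation is settled at the first pair that differs. The following is a minimal standalone sketch of that idea; it handles only numeric components and is not the actual cmp_versions implementation from scripts/common.sh.

# Sketch: succeed (return 0) when $1 sorts below $2, comparing '.', '-' and ':'
# separated numeric components left to right. Illustrative only.
version_lt() {
    local IFS='.-:'
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "installed lcov is older than 2"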
00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:44.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:32:44.813 00:14:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:32:53.048 00:14:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:53.048 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.048 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:53.048 00:14:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:53.049 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:53.049 Found net devices under 0000:af:00.0: cvl_0_0 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:53.049 Found net devices under 0000:af:00.1: cvl_0_1 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:53.049 
00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:53.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:53.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:32:53.049 00:32:53.049 --- 10.0.0.2 ping statistics --- 00:32:53.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.049 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:53.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:53.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:32:53.049 00:32:53.049 --- 10.0.0.1 ping statistics --- 00:32:53.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.049 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=551055 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 551055 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 551055 ']' 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:53.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:53.049 00:14:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:53.049 [2024-12-10 00:14:36.493243] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:32:53.049 [2024-12-10 00:14:36.493292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.049 [2024-12-10 00:14:36.587152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.049 [2024-12-10 00:14:36.623915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.050 [2024-12-10 00:14:36.623952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.050 [2024-12-10 00:14:36.623961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.050 [2024-12-10 00:14:36.623969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.050 [2024-12-10 00:14:36.623976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:53.050 [2024-12-10 00:14:36.624552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:53.050 [2024-12-10 00:14:37.385507] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.050 [2024-12-10 00:14:37.393690] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:53.050 null0 00:32:53.050 [2024-12-10 00:14:37.425666] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=551297 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 551297 /tmp/host.sock 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 551297 ']' 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:53.050 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:53.050 00:14:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:53.050 [2024-12-10 00:14:37.500204] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:32:53.050 [2024-12-10 00:14:37.500256] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551297 ] 00:32:53.309 [2024-12-10 00:14:37.588986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.309 [2024-12-10 00:14:37.630344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.887 00:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:53.887 00:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:32:53.887 00:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:53.887 00:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:53.887 00:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.887 00:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:53.887 00:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.887 00:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:53.887 00:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.887 00:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:54.155 00:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.155 00:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:54.155 00:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.155 00:14:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.092 [2024-12-10 00:14:39.464250] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:55.092 [2024-12-10 00:14:39.464271] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:55.092 [2024-12-10 00:14:39.464287] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:55.359 [2024-12-10 00:14:39.590661] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:55.359 [2024-12-10 00:14:39.805784] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:55.359 [2024-12-10 00:14:39.806571] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1cd4ac0:1 started. 00:32:55.359 [2024-12-10 00:14:39.808034] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:55.359 [2024-12-10 00:14:39.808076] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:55.359 [2024-12-10 00:14:39.808100] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:55.359 [2024-12-10 00:14:39.808114] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:55.359 [2024-12-10 00:14:39.808134] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:55.359 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.359 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:55.359 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:55.359 [2024-12-10 00:14:39.813033] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1cd4ac0 was disconnected and freed. delete nvme_qpair. 
00:32:55.359 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:55.359 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:55.359 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.359 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.360 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:55.360 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:55.622 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.622 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:55.622 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:55.622 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:55.622 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:55.622 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:55.622 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:55.623 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:55.623 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.623 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:55.623 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:55.623 00:14:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:55.623 00:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.623 00:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:55.623 00:14:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:56.560 00:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:56.819 00:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:56.819 00:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:56.819 00:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.819 00:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:56.819 00:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:56.819 00:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:56.819 00:14:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.819 00:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:56.819 00:14:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:57.757 00:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:57.757 00:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.757 00:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:57.757 00:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.757 00:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:57.757 00:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:57.757 00:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:57.757 00:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.757 00:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:57.757 00:14:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:58.694 00:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:58.694 00:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:58.694 00:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:58.694 00:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.694 00:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:58.694 00:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:58.694 00:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:58.954 00:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.954 00:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:58.954 00:14:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:59.898 00:14:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:59.898 00:14:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:59.898 00:14:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:59.898 00:14:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.898 00:14:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:59.898 00:14:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:59.898 00:14:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:59.898 00:14:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.898 00:14:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:59.898 00:14:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:00.835 00:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:00.835 00:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.835 [2024-12-10 00:14:45.249359] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:00.835 [2024-12-10 00:14:45.249408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.835 [2024-12-10 00:14:45.249423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.835 [2024-12-10 00:14:45.249435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.835 [2024-12-10 00:14:45.249444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.835 [2024-12-10 00:14:45.249454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.835 [2024-12-10 00:14:45.249463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.835 [2024-12-10 00:14:45.249472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.835 [2024-12-10 00:14:45.249481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.835 [2024-12-10 00:14:45.249492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.835 [2024-12-10 00:14:45.249500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.835 [2024-12-10 00:14:45.249510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb1290 is same with the state(6) to be set 00:33:00.835 00:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:00.835 00:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.835 00:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:00.835 00:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:00.835 00:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:00.835 [2024-12-10 00:14:45.259380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb1290 (9): 
Bad file descriptor 00:33:00.835 [2024-12-10 00:14:45.269416] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:00.835 [2024-12-10 00:14:45.269429] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:00.835 [2024-12-10 00:14:45.269438] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:00.835 [2024-12-10 00:14:45.269445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.835 [2024-12-10 00:14:45.269469] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:00.835 00:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.835 00:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:00.835 00:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:02.213 00:14:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:02.213 00:14:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.213 00:14:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:02.213 00:14:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.213 00:14:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:02.213 00:14:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:02.213 00:14:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:02.213 [2024-12-10 00:14:46.323901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:02.213 [2024-12-10 00:14:46.323987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb1290 with addr=10.0.0.2, port=4420 00:33:02.213 [2024-12-10 00:14:46.324027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb1290 is same with the state(6) to be set 00:33:02.213 [2024-12-10 00:14:46.324086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb1290 (9): Bad file descriptor 00:33:02.213 [2024-12-10 00:14:46.325054] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:33:02.213 [2024-12-10 00:14:46.325128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:02.213 [2024-12-10 00:14:46.325161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:02.213 [2024-12-10 00:14:46.325193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:02.213 [2024-12-10 00:14:46.325220] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:02.213 [2024-12-10 00:14:46.325243] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:33:02.213 [2024-12-10 00:14:46.325263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:02.213 [2024-12-10 00:14:46.325293] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:02.213 [2024-12-10 00:14:46.325314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:02.213 00:14:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.214 00:14:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:02.214 00:14:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:03.150 [2024-12-10 00:14:47.327830] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:03.150 [2024-12-10 00:14:47.327851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:03.150 [2024-12-10 00:14:47.327863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:03.150 [2024-12-10 00:14:47.327873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:03.150 [2024-12-10 00:14:47.327882] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:03.150 [2024-12-10 00:14:47.327891] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:03.150 [2024-12-10 00:14:47.327897] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:03.150 [2024-12-10 00:14:47.327903] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:03.150 [2024-12-10 00:14:47.327923] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:03.150 [2024-12-10 00:14:47.327943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.150 [2024-12-10 00:14:47.327954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.150 [2024-12-10 00:14:47.327964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.150 [2024-12-10 00:14:47.327982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.150 [2024-12-10 00:14:47.327992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.150 [2024-12-10 00:14:47.328001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.150 [2024-12-10 00:14:47.328010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.150 [2024-12-10 00:14:47.328019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.150 [2024-12-10 00:14:47.328028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.150 [2024-12-10 00:14:47.328037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.150 [2024-12-10 00:14:47.328045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:33:03.150 [2024-12-10 00:14:47.328230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca09e0 (9): Bad file descriptor 00:33:03.151 [2024-12-10 00:14:47.329242] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:03.151 [2024-12-10 00:14:47.329254] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:03.151 00:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:04.530 00:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:04.530 00:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:04.530 00:14:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.530 00:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:04.530 00:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:04.530 00:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:04.530 00:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:04.530 00:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.530 00:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:04.530 00:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:05.098 [2024-12-10 00:14:49.338751] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:05.098 [2024-12-10 00:14:49.338768] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:05.098 [2024-12-10 00:14:49.338781] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:05.098 [2024-12-10 00:14:49.427039] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:05.098 [2024-12-10 00:14:49.488687] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:05.098 [2024-12-10 00:14:49.489204] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1cb35d0:1 started. 00:33:05.098 [2024-12-10 00:14:49.490256] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:05.098 [2024-12-10 00:14:49.490287] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:05.098 [2024-12-10 00:14:49.490307] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:05.098 [2024-12-10 00:14:49.490321] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:05.098 [2024-12-10 00:14:49.490329] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:05.098 [2024-12-10 00:14:49.497704] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1cb35d0 was disconnected and freed. delete nvme_qpair. 
00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 551297 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 551297 ']' 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 551297 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551297 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551297' 00:33:05.357 killing process with pid 551297 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 551297 00:33:05.357 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 551297 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:05.616 rmmod nvme_tcp 00:33:05.616 rmmod nvme_fabrics 00:33:05.616 rmmod nvme_keyring 00:33:05.616 00:14:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 551055 ']' 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 551055 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 551055 ']' 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 551055 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.616 00:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551055 00:33:05.616 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:05.616 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:05.616 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551055' 00:33:05.616 killing process with pid 551055 00:33:05.616 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 551055 00:33:05.616 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 551055 00:33:05.876 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:05.876 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:05.876 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:05.876 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:05.876 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:05.876 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:05.876 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:05.876 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:05.876 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:05.876 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.876 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.876 00:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:08.421 00:33:08.421 real 0m23.282s 00:33:08.421 user 0m27.151s 00:33:08.421 sys 0m7.632s 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:08.421 ************************************ 00:33:08.421 END TEST nvmf_discovery_remove_ifc 00:33:08.421 ************************************ 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.421 ************************************ 00:33:08.421 START TEST nvmf_identify_kernel_target 00:33:08.421 ************************************ 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:08.421 * Looking for test storage... 00:33:08.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:08.421 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:08.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.421 --rc genhtml_branch_coverage=1 00:33:08.421 --rc genhtml_function_coverage=1 00:33:08.421 --rc genhtml_legend=1 00:33:08.421 --rc geninfo_all_blocks=1 00:33:08.421 --rc geninfo_unexecuted_blocks=1 00:33:08.421 00:33:08.421 ' 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:08.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.422 --rc genhtml_branch_coverage=1 00:33:08.422 --rc genhtml_function_coverage=1 00:33:08.422 --rc genhtml_legend=1 00:33:08.422 --rc geninfo_all_blocks=1 00:33:08.422 --rc geninfo_unexecuted_blocks=1 00:33:08.422 00:33:08.422 ' 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:08.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.422 --rc genhtml_branch_coverage=1 00:33:08.422 --rc genhtml_function_coverage=1 00:33:08.422 --rc genhtml_legend=1 00:33:08.422 --rc geninfo_all_blocks=1 00:33:08.422 --rc geninfo_unexecuted_blocks=1 00:33:08.422 00:33:08.422 ' 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:08.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.422 --rc genhtml_branch_coverage=1 00:33:08.422 --rc genhtml_function_coverage=1 00:33:08.422 --rc genhtml_legend=1 00:33:08.422 --rc geninfo_all_blocks=1 00:33:08.422 --rc geninfo_unexecuted_blocks=1 00:33:08.422 00:33:08.422 ' 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:08.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:08.422 00:14:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:16.547 00:14:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.547 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:16.548 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:16.548 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:16.548 Found net devices under 0000:af:00.0: cvl_0_0 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:16.548 Found net devices under 0000:af:00.1: cvl_0_1 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:16.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:33:16.548 00:33:16.548 --- 10.0.0.2 ping statistics --- 00:33:16.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.548 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:16.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:33:16.548 00:33:16.548 --- 10.0.0.1 ping statistics --- 00:33:16.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.548 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:16.548 00:14:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:16.548 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:16.549 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:16.549 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:16.549 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:33:16.549 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:16.549 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:16.549 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:16.549 00:14:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:19.087 Waiting for block devices as requested 00:33:19.087 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:19.087 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:19.087 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:19.087 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:19.346 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:19.346 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:19.346 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:19.605 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:19.605 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:19.605 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:19.863 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:19.863 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:19.863 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:20.123 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:20.123 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:20.123 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:20.382 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:33:20.382 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:20.382 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:20.382 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:20.382 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:20.382 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:20.382 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
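The configure_kernel_target trace that begins above and continues below builds an in-kernel NVMe-oF/TCP target through nvmet configfs: setup.sh reset hands the NVMe disk back to the kernel driver, the first non-zoned, unused /dev/nvme0n1 is selected (the spdk-gpt.py probe bails with no valid GPT data), and the bare mkdir/echo/ln commands create the nqn.2016-06.io.spdk:testnqn subsystem backed by that device and expose it on 10.0.0.1:4420. A standalone sketch of the same steps; the configfs attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) follow the standard kernel nvmet layout and are inferred from the bare echo lines in the trace rather than shown there verbatim:

    #!/usr/bin/env bash
    # Sketch: export /dev/nvme0n1 as nqn.2016-06.io.spdk:testnqn over NVMe/TCP on 10.0.0.1:4420.
    NQN=nqn.2016-06.io.spdk:testnqn
    SUBSYS=/sys/kernel/config/nvmet/subsystems/$NQN
    PORT=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    modprobe nvmet-tcp

    mkdir "$SUBSYS"
    mkdir "$SUBSYS/namespaces/1"
    mkdir "$PORT"

    echo "SPDK-$NQN"  > "$SUBSYS/attr_model"          # model string reported by the identify output below
    echo 1            > "$SUBSYS/attr_allow_any_host"  # assumed mapping of one of the bare 'echo 1' lines
    echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
    echo 1            > "$SUBSYS/namespaces/1/enable"

    echo tcp          > "$PORT/addr_trtype"
    echo ipv4         > "$PORT/addr_adrfam"
    echo 10.0.0.1     > "$PORT/addr_traddr"
    echo 4420         > "$PORT/addr_trsvcid"

    # Activating the port for the subsystem:
    ln -s "$SUBSYS" "$PORT/subsystems/"

    # The initiator side can then enumerate both the discovery subsystem and testnqn:
    nvme discover -t tcp -a 10.0.0.1 -s 4420

The two-record discovery log that follows in the trace (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn, both at 10.0.0.1:4420) is what that final discover command returns.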
00:33:20.382 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:20.382 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:20.382 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:20.642 No valid GPT data, bailing 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:33:20.642 00:33:20.642 Discovery Log Number of Records 2, Generation counter 2 00:33:20.642 =====Discovery Log Entry 0====== 00:33:20.642 trtype: tcp 00:33:20.642 adrfam: ipv4 00:33:20.642 subtype: current discovery subsystem 00:33:20.642 treq: not specified, sq flow control disable supported 00:33:20.642 portid: 1 00:33:20.642 trsvcid: 4420 00:33:20.642 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:20.642 traddr: 10.0.0.1 00:33:20.642 eflags: none 00:33:20.642 sectype: none 00:33:20.642 =====Discovery Log Entry 1====== 00:33:20.642 trtype: tcp 00:33:20.642 adrfam: ipv4 00:33:20.642 subtype: nvme subsystem 00:33:20.642 treq: not specified, sq flow control disable 
supported 00:33:20.642 portid: 1 00:33:20.642 trsvcid: 4420 00:33:20.642 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:20.642 traddr: 10.0.0.1 00:33:20.642 eflags: none 00:33:20.642 sectype: none 00:33:20.642 00:15:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:20.642 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:20.642 ===================================================== 00:33:20.642 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:20.642 ===================================================== 00:33:20.642 Controller Capabilities/Features 00:33:20.642 ================================ 00:33:20.642 Vendor ID: 0000 00:33:20.642 Subsystem Vendor ID: 0000 00:33:20.642 Serial Number: 66e463eb55582e90cffe 00:33:20.642 Model Number: Linux 00:33:20.642 Firmware Version: 6.8.9-20 00:33:20.642 Recommended Arb Burst: 0 00:33:20.642 IEEE OUI Identifier: 00 00 00 00:33:20.642 Multi-path I/O 00:33:20.642 May have multiple subsystem ports: No 00:33:20.642 May have multiple controllers: No 00:33:20.642 Associated with SR-IOV VF: No 00:33:20.642 Max Data Transfer Size: Unlimited 00:33:20.643 Max Number of Namespaces: 0 00:33:20.643 Max Number of I/O Queues: 1024 00:33:20.643 NVMe Specification Version (VS): 1.3 00:33:20.643 NVMe Specification Version (Identify): 1.3 00:33:20.643 Maximum Queue Entries: 1024 00:33:20.643 Contiguous Queues Required: No 00:33:20.643 Arbitration Mechanisms Supported 00:33:20.643 Weighted Round Robin: Not Supported 00:33:20.643 Vendor Specific: Not Supported 00:33:20.643 Reset Timeout: 7500 ms 00:33:20.643 Doorbell Stride: 4 bytes 00:33:20.643 NVM Subsystem Reset: Not Supported 00:33:20.643 Command Sets Supported 00:33:20.643 NVM Command Set: Supported 00:33:20.643 Boot Partition: Not Supported 00:33:20.643 Memory Page Size Minimum: 4096 bytes 00:33:20.643 Memory Page Size Maximum: 4096 bytes 00:33:20.643 Persistent Memory Region: Not Supported 00:33:20.643 Optional Asynchronous Events Supported 00:33:20.643 Namespace Attribute Notices: Not Supported 00:33:20.643 Firmware Activation Notices: Not Supported 00:33:20.643 ANA Change Notices: Not Supported 00:33:20.643 PLE Aggregate Log Change Notices: Not Supported 00:33:20.643 LBA Status Info Alert Notices: Not Supported 00:33:20.643 EGE Aggregate Log Change Notices: Not Supported 00:33:20.643 Normal NVM Subsystem Shutdown event: Not Supported 00:33:20.643 Zone Descriptor Change Notices: Not Supported 00:33:20.643 Discovery Log Change Notices: Supported 00:33:20.643 Controller Attributes 00:33:20.643 128-bit Host Identifier: Not Supported 00:33:20.643 Non-Operational Permissive Mode: Not Supported 00:33:20.643 NVM Sets: Not Supported 00:33:20.643 Read Recovery Levels: Not Supported 00:33:20.643 Endurance Groups: Not Supported 00:33:20.643 Predictable Latency Mode: Not Supported 00:33:20.643 Traffic Based Keep ALive: Not Supported 00:33:20.643 Namespace Granularity: Not Supported 00:33:20.643 SQ Associations: Not Supported 00:33:20.643 UUID List: Not Supported 00:33:20.643 Multi-Domain Subsystem: Not Supported 00:33:20.643 Fixed Capacity Management: Not Supported 00:33:20.643 Variable Capacity Management: Not Supported 00:33:20.643 Delete Endurance Group: Not Supported 00:33:20.643 Delete NVM Set: Not Supported 00:33:20.643 Extended LBA Formats Supported: Not Supported 00:33:20.643 Flexible Data Placement 
Supported: Not Supported 00:33:20.643 00:33:20.643 Controller Memory Buffer Support 00:33:20.643 ================================ 00:33:20.643 Supported: No 00:33:20.643 00:33:20.643 Persistent Memory Region Support 00:33:20.643 ================================ 00:33:20.643 Supported: No 00:33:20.643 00:33:20.643 Admin Command Set Attributes 00:33:20.643 ============================ 00:33:20.643 Security Send/Receive: Not Supported 00:33:20.643 Format NVM: Not Supported 00:33:20.643 Firmware Activate/Download: Not Supported 00:33:20.643 Namespace Management: Not Supported 00:33:20.643 Device Self-Test: Not Supported 00:33:20.643 Directives: Not Supported 00:33:20.643 NVMe-MI: Not Supported 00:33:20.643 Virtualization Management: Not Supported 00:33:20.643 Doorbell Buffer Config: Not Supported 00:33:20.643 Get LBA Status Capability: Not Supported 00:33:20.643 Command & Feature Lockdown Capability: Not Supported 00:33:20.643 Abort Command Limit: 1 00:33:20.643 Async Event Request Limit: 1 00:33:20.643 Number of Firmware Slots: N/A 00:33:20.643 Firmware Slot 1 Read-Only: N/A 00:33:20.643 Firmware Activation Without Reset: N/A 00:33:20.643 Multiple Update Detection Support: N/A 00:33:20.643 Firmware Update Granularity: No Information Provided 00:33:20.643 Per-Namespace SMART Log: No 00:33:20.643 Asymmetric Namespace Access Log Page: Not Supported 00:33:20.643 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:20.643 Command Effects Log Page: Not Supported 00:33:20.643 Get Log Page Extended Data: Supported 00:33:20.643 Telemetry Log Pages: Not Supported 00:33:20.643 Persistent Event Log Pages: Not Supported 00:33:20.643 Supported Log Pages Log Page: May Support 00:33:20.643 Commands Supported & Effects Log Page: Not Supported 00:33:20.643 Feature Identifiers & Effects Log Page:May Support 00:33:20.643 NVMe-MI Commands & Effects Log Page: May Support 00:33:20.643 Data Area 4 for Telemetry Log: Not Supported 00:33:20.643 Error Log Page Entries Supported: 1 00:33:20.643 Keep Alive: Not Supported 00:33:20.643 00:33:20.643 NVM Command Set Attributes 00:33:20.643 ========================== 00:33:20.643 Submission Queue Entry Size 00:33:20.643 Max: 1 00:33:20.643 Min: 1 00:33:20.643 Completion Queue Entry Size 00:33:20.643 Max: 1 00:33:20.643 Min: 1 00:33:20.643 Number of Namespaces: 0 00:33:20.643 Compare Command: Not Supported 00:33:20.643 Write Uncorrectable Command: Not Supported 00:33:20.643 Dataset Management Command: Not Supported 00:33:20.643 Write Zeroes Command: Not Supported 00:33:20.643 Set Features Save Field: Not Supported 00:33:20.643 Reservations: Not Supported 00:33:20.643 Timestamp: Not Supported 00:33:20.643 Copy: Not Supported 00:33:20.643 Volatile Write Cache: Not Present 00:33:20.643 Atomic Write Unit (Normal): 1 00:33:20.643 Atomic Write Unit (PFail): 1 00:33:20.643 Atomic Compare & Write Unit: 1 00:33:20.643 Fused Compare & Write: Not Supported 00:33:20.643 Scatter-Gather List 00:33:20.643 SGL Command Set: Supported 00:33:20.643 SGL Keyed: Not Supported 00:33:20.643 SGL Bit Bucket Descriptor: Not Supported 00:33:20.643 SGL Metadata Pointer: Not Supported 00:33:20.643 Oversized SGL: Not Supported 00:33:20.643 SGL Metadata Address: Not Supported 00:33:20.643 SGL Offset: Supported 00:33:20.643 Transport SGL Data Block: Not Supported 00:33:20.643 Replay Protected Memory Block: Not Supported 00:33:20.643 00:33:20.643 Firmware Slot Information 00:33:20.643 ========================= 00:33:20.643 Active slot: 0 00:33:20.643 00:33:20.643 00:33:20.643 Error Log 00:33:20.643 
========= 00:33:20.643 00:33:20.643 Active Namespaces 00:33:20.643 ================= 00:33:20.643 Discovery Log Page 00:33:20.643 ================== 00:33:20.643 Generation Counter: 2 00:33:20.643 Number of Records: 2 00:33:20.643 Record Format: 0 00:33:20.643 00:33:20.643 Discovery Log Entry 0 00:33:20.643 ---------------------- 00:33:20.643 Transport Type: 3 (TCP) 00:33:20.643 Address Family: 1 (IPv4) 00:33:20.643 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:20.643 Entry Flags: 00:33:20.643 Duplicate Returned Information: 0 00:33:20.643 Explicit Persistent Connection Support for Discovery: 0 00:33:20.643 Transport Requirements: 00:33:20.643 Secure Channel: Not Specified 00:33:20.643 Port ID: 1 (0x0001) 00:33:20.643 Controller ID: 65535 (0xffff) 00:33:20.643 Admin Max SQ Size: 32 00:33:20.643 Transport Service Identifier: 4420 00:33:20.643 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:20.643 Transport Address: 10.0.0.1 00:33:20.643 Discovery Log Entry 1 00:33:20.643 ---------------------- 00:33:20.643 Transport Type: 3 (TCP) 00:33:20.643 Address Family: 1 (IPv4) 00:33:20.643 Subsystem Type: 2 (NVM Subsystem) 00:33:20.643 Entry Flags: 00:33:20.643 Duplicate Returned Information: 0 00:33:20.643 Explicit Persistent Connection Support for Discovery: 0 00:33:20.643 Transport Requirements: 00:33:20.643 Secure Channel: Not Specified 00:33:20.643 Port ID: 1 (0x0001) 00:33:20.643 Controller ID: 65535 (0xffff) 00:33:20.643 Admin Max SQ Size: 32 00:33:20.643 Transport Service Identifier: 4420 00:33:20.643 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:20.643 Transport Address: 10.0.0.1 00:33:20.643 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:20.903 get_feature(0x01) failed 00:33:20.903 get_feature(0x02) failed 00:33:20.903 get_feature(0x04) failed 00:33:20.903 ===================================================== 00:33:20.903 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:20.903 ===================================================== 00:33:20.903 Controller Capabilities/Features 00:33:20.903 ================================ 00:33:20.903 Vendor ID: 0000 00:33:20.903 Subsystem Vendor ID: 0000 00:33:20.903 Serial Number: 2f77e1344c8b46fa5c81 00:33:20.903 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:20.903 Firmware Version: 6.8.9-20 00:33:20.903 Recommended Arb Burst: 6 00:33:20.903 IEEE OUI Identifier: 00 00 00 00:33:20.903 Multi-path I/O 00:33:20.903 May have multiple subsystem ports: Yes 00:33:20.903 May have multiple controllers: Yes 00:33:20.903 Associated with SR-IOV VF: No 00:33:20.903 Max Data Transfer Size: Unlimited 00:33:20.903 Max Number of Namespaces: 1024 00:33:20.903 Max Number of I/O Queues: 128 00:33:20.903 NVMe Specification Version (VS): 1.3 00:33:20.903 NVMe Specification Version (Identify): 1.3 00:33:20.903 Maximum Queue Entries: 1024 00:33:20.903 Contiguous Queues Required: No 00:33:20.903 Arbitration Mechanisms Supported 00:33:20.903 Weighted Round Robin: Not Supported 00:33:20.903 Vendor Specific: Not Supported 00:33:20.903 Reset Timeout: 7500 ms 00:33:20.903 Doorbell Stride: 4 bytes 00:33:20.903 NVM Subsystem Reset: Not Supported 00:33:20.903 Command Sets Supported 00:33:20.903 NVM Command Set: Supported 00:33:20.903 Boot Partition: Not Supported 00:33:20.903 
Memory Page Size Minimum: 4096 bytes 00:33:20.903 Memory Page Size Maximum: 4096 bytes 00:33:20.903 Persistent Memory Region: Not Supported 00:33:20.903 Optional Asynchronous Events Supported 00:33:20.903 Namespace Attribute Notices: Supported 00:33:20.903 Firmware Activation Notices: Not Supported 00:33:20.903 ANA Change Notices: Supported 00:33:20.903 PLE Aggregate Log Change Notices: Not Supported 00:33:20.903 LBA Status Info Alert Notices: Not Supported 00:33:20.903 EGE Aggregate Log Change Notices: Not Supported 00:33:20.903 Normal NVM Subsystem Shutdown event: Not Supported 00:33:20.903 Zone Descriptor Change Notices: Not Supported 00:33:20.903 Discovery Log Change Notices: Not Supported 00:33:20.903 Controller Attributes 00:33:20.903 128-bit Host Identifier: Supported 00:33:20.903 Non-Operational Permissive Mode: Not Supported 00:33:20.903 NVM Sets: Not Supported 00:33:20.903 Read Recovery Levels: Not Supported 00:33:20.903 Endurance Groups: Not Supported 00:33:20.903 Predictable Latency Mode: Not Supported 00:33:20.903 Traffic Based Keep ALive: Supported 00:33:20.903 Namespace Granularity: Not Supported 00:33:20.903 SQ Associations: Not Supported 00:33:20.903 UUID List: Not Supported 00:33:20.903 Multi-Domain Subsystem: Not Supported 00:33:20.903 Fixed Capacity Management: Not Supported 00:33:20.903 Variable Capacity Management: Not Supported 00:33:20.903 Delete Endurance Group: Not Supported 00:33:20.903 Delete NVM Set: Not Supported 00:33:20.903 Extended LBA Formats Supported: Not Supported 00:33:20.903 Flexible Data Placement Supported: Not Supported 00:33:20.903 00:33:20.903 Controller Memory Buffer Support 00:33:20.903 ================================ 00:33:20.903 Supported: No 00:33:20.903 00:33:20.903 Persistent Memory Region Support 00:33:20.903 ================================ 00:33:20.903 Supported: No 00:33:20.903 00:33:20.903 Admin Command Set Attributes 00:33:20.903 ============================ 00:33:20.903 Security Send/Receive: Not Supported 00:33:20.903 Format NVM: Not Supported 00:33:20.903 Firmware Activate/Download: Not Supported 00:33:20.903 Namespace Management: Not Supported 00:33:20.903 Device Self-Test: Not Supported 00:33:20.903 Directives: Not Supported 00:33:20.903 NVMe-MI: Not Supported 00:33:20.903 Virtualization Management: Not Supported 00:33:20.903 Doorbell Buffer Config: Not Supported 00:33:20.903 Get LBA Status Capability: Not Supported 00:33:20.903 Command & Feature Lockdown Capability: Not Supported 00:33:20.903 Abort Command Limit: 4 00:33:20.903 Async Event Request Limit: 4 00:33:20.903 Number of Firmware Slots: N/A 00:33:20.903 Firmware Slot 1 Read-Only: N/A 00:33:20.903 Firmware Activation Without Reset: N/A 00:33:20.903 Multiple Update Detection Support: N/A 00:33:20.903 Firmware Update Granularity: No Information Provided 00:33:20.903 Per-Namespace SMART Log: Yes 00:33:20.903 Asymmetric Namespace Access Log Page: Supported 00:33:20.903 ANA Transition Time : 10 sec 00:33:20.903 00:33:20.903 Asymmetric Namespace Access Capabilities 00:33:20.903 ANA Optimized State : Supported 00:33:20.903 ANA Non-Optimized State : Supported 00:33:20.903 ANA Inaccessible State : Supported 00:33:20.903 ANA Persistent Loss State : Supported 00:33:20.903 ANA Change State : Supported 00:33:20.903 ANAGRPID is not changed : No 00:33:20.903 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:20.903 00:33:20.903 ANA Group Identifier Maximum : 128 00:33:20.903 Number of ANA Group Identifiers : 128 00:33:20.903 Max Number of Allowed Namespaces : 1024 00:33:20.903 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:20.903 Command Effects Log Page: Supported 00:33:20.903 Get Log Page Extended Data: Supported 00:33:20.903 Telemetry Log Pages: Not Supported 00:33:20.903 Persistent Event Log Pages: Not Supported 00:33:20.903 Supported Log Pages Log Page: May Support 00:33:20.903 Commands Supported & Effects Log Page: Not Supported 00:33:20.903 Feature Identifiers & Effects Log Page:May Support 00:33:20.903 NVMe-MI Commands & Effects Log Page: May Support 00:33:20.903 Data Area 4 for Telemetry Log: Not Supported 00:33:20.903 Error Log Page Entries Supported: 128 00:33:20.903 Keep Alive: Supported 00:33:20.903 Keep Alive Granularity: 1000 ms 00:33:20.903 00:33:20.903 NVM Command Set Attributes 00:33:20.903 ========================== 00:33:20.903 Submission Queue Entry Size 00:33:20.903 Max: 64 00:33:20.903 Min: 64 00:33:20.903 Completion Queue Entry Size 00:33:20.903 Max: 16 00:33:20.903 Min: 16 00:33:20.903 Number of Namespaces: 1024 00:33:20.903 Compare Command: Not Supported 00:33:20.903 Write Uncorrectable Command: Not Supported 00:33:20.903 Dataset Management Command: Supported 00:33:20.903 Write Zeroes Command: Supported 00:33:20.903 Set Features Save Field: Not Supported 00:33:20.903 Reservations: Not Supported 00:33:20.903 Timestamp: Not Supported 00:33:20.903 Copy: Not Supported 00:33:20.903 Volatile Write Cache: Present 00:33:20.903 Atomic Write Unit (Normal): 1 00:33:20.903 Atomic Write Unit (PFail): 1 00:33:20.903 Atomic Compare & Write Unit: 1 00:33:20.903 Fused Compare & Write: Not Supported 00:33:20.903 Scatter-Gather List 00:33:20.903 SGL Command Set: Supported 00:33:20.903 SGL Keyed: Not Supported 00:33:20.903 SGL Bit Bucket Descriptor: Not Supported 00:33:20.903 SGL Metadata Pointer: Not Supported 00:33:20.903 Oversized SGL: Not Supported 00:33:20.903 SGL Metadata Address: Not Supported 00:33:20.903 SGL Offset: Supported 00:33:20.903 Transport SGL Data Block: Not Supported 00:33:20.903 Replay Protected Memory Block: Not Supported 00:33:20.903 00:33:20.903 Firmware Slot Information 00:33:20.903 ========================= 00:33:20.903 Active slot: 0 00:33:20.903 00:33:20.903 Asymmetric Namespace Access 00:33:20.903 =========================== 00:33:20.903 Change Count : 0 00:33:20.903 Number of ANA Group Descriptors : 1 00:33:20.903 ANA Group Descriptor : 0 00:33:20.903 ANA Group ID : 1 00:33:20.903 Number of NSID Values : 1 00:33:20.903 Change Count : 0 00:33:20.903 ANA State : 1 00:33:20.903 Namespace Identifier : 1 00:33:20.903 00:33:20.903 Commands Supported and Effects 00:33:20.903 ============================== 00:33:20.903 Admin Commands 00:33:20.903 -------------- 00:33:20.903 Get Log Page (02h): Supported 00:33:20.903 Identify (06h): Supported 00:33:20.903 Abort (08h): Supported 00:33:20.903 Set Features (09h): Supported 00:33:20.904 Get Features (0Ah): Supported 00:33:20.904 Asynchronous Event Request (0Ch): Supported 00:33:20.904 Keep Alive (18h): Supported 00:33:20.904 I/O Commands 00:33:20.904 ------------ 00:33:20.904 Flush (00h): Supported 00:33:20.904 Write (01h): Supported LBA-Change 00:33:20.904 Read (02h): Supported 00:33:20.904 Write Zeroes (08h): Supported LBA-Change 00:33:20.904 Dataset Management (09h): Supported 00:33:20.904 00:33:20.904 Error Log 00:33:20.904 ========= 00:33:20.904 Entry: 0 00:33:20.904 Error Count: 0x3 00:33:20.904 Submission Queue Id: 0x0 00:33:20.904 Command Id: 0x5 00:33:20.904 Phase Bit: 0 00:33:20.904 Status Code: 0x2 00:33:20.904 Status Code Type: 0x0 00:33:20.904 Do Not Retry: 1 00:33:20.904 
Error Location: 0x28 00:33:20.904 LBA: 0x0 00:33:20.904 Namespace: 0x0 00:33:20.904 Vendor Log Page: 0x0 00:33:20.904 ----------- 00:33:20.904 Entry: 1 00:33:20.904 Error Count: 0x2 00:33:20.904 Submission Queue Id: 0x0 00:33:20.904 Command Id: 0x5 00:33:20.904 Phase Bit: 0 00:33:20.904 Status Code: 0x2 00:33:20.904 Status Code Type: 0x0 00:33:20.904 Do Not Retry: 1 00:33:20.904 Error Location: 0x28 00:33:20.904 LBA: 0x0 00:33:20.904 Namespace: 0x0 00:33:20.904 Vendor Log Page: 0x0 00:33:20.904 ----------- 00:33:20.904 Entry: 2 00:33:20.904 Error Count: 0x1 00:33:20.904 Submission Queue Id: 0x0 00:33:20.904 Command Id: 0x4 00:33:20.904 Phase Bit: 0 00:33:20.904 Status Code: 0x2 00:33:20.904 Status Code Type: 0x0 00:33:20.904 Do Not Retry: 1 00:33:20.904 Error Location: 0x28 00:33:20.904 LBA: 0x0 00:33:20.904 Namespace: 0x0 00:33:20.904 Vendor Log Page: 0x0 00:33:20.904 00:33:20.904 Number of Queues 00:33:20.904 ================ 00:33:20.904 Number of I/O Submission Queues: 128 00:33:20.904 Number of I/O Completion Queues: 128 00:33:20.904 00:33:20.904 ZNS Specific Controller Data 00:33:20.904 ============================ 00:33:20.904 Zone Append Size Limit: 0 00:33:20.904 00:33:20.904 00:33:20.904 Active Namespaces 00:33:20.904 ================= 00:33:20.904 get_feature(0x05) failed 00:33:20.904 Namespace ID:1 00:33:20.904 Command Set Identifier: NVM (00h) 00:33:20.904 Deallocate: Supported 00:33:20.904 Deallocated/Unwritten Error: Not Supported 00:33:20.904 Deallocated Read Value: Unknown 00:33:20.904 Deallocate in Write Zeroes: Not Supported 00:33:20.904 Deallocated Guard Field: 0xFFFF 00:33:20.904 Flush: Supported 00:33:20.904 Reservation: Not Supported 00:33:20.904 Namespace Sharing Capabilities: Multiple Controllers 00:33:20.904 Size (in LBAs): 3125627568 (1490GiB) 00:33:20.904 Capacity (in LBAs): 3125627568 (1490GiB) 00:33:20.904 Utilization (in LBAs): 3125627568 (1490GiB) 00:33:20.904 UUID: 75ea4116-7e8e-4a75-8a4e-c9520a3afefa 00:33:20.904 Thin Provisioning: Not Supported 00:33:20.904 Per-NS Atomic Units: Yes 00:33:20.904 Atomic Boundary Size (Normal): 0 00:33:20.904 Atomic Boundary Size (PFail): 0 00:33:20.904 Atomic Boundary Offset: 0 00:33:20.904 NGUID/EUI64 Never Reused: No 00:33:20.904 ANA group ID: 1 00:33:20.904 Namespace Write Protected: No 00:33:20.904 Number of LBA Formats: 1 00:33:20.904 Current LBA Format: LBA Format #00 00:33:20.904 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:20.904 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.904 rmmod nvme_tcp 00:33:20.904 rmmod nvme_fabrics 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:20.904 00:15:05 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.904 00:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.441 00:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:23.441 00:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:23.441 00:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:23.441 00:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:33:23.441 00:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:23.441 00:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:23.441 00:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:23.441 00:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:23.441 00:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:23.441 00:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:23.441 00:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:26.732 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:26.732 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:28.121 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:33:28.381 00:33:28.381 real 0m20.318s 00:33:28.381 user 0m4.953s 00:33:28.381 sys 0m10.955s 00:33:28.381 00:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:28.381 00:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:28.381 ************************************ 00:33:28.381 END TEST nvmf_identify_kernel_target 00:33:28.381 ************************************ 00:33:28.381 00:15:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:28.381 00:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:28.381 00:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:28.381 00:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.381 ************************************ 00:33:28.381 START TEST nvmf_auth_host 00:33:28.381 ************************************ 00:33:28.381 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:28.641 * Looking for test storage... 
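[annotation] The nvmf_identify_kernel_target trace that just finished did two things worth separating out: it queried the kernel-space nvmet target over TCP with spdk_nvme_identify, and then clean_kernel_target unwound the configfs entries so the port and subsystem are gone before the next suite runs. A condensed, hedged sketch of those two steps follows; the NQN, port number and configfs paths are taken from the trace, but the redirect target of the bare `echo 0` is not visible in the xtrace output and is assumed here to be the namespace's enable attribute.

  # 1) Identify the kernel target the same way the test does:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

  # 2) Tear the kernel target down again (clean_kernel_target, reconstructed from the trace):
  nqn=nqn.2016-06.io.spdk:testnqn
  cfg=/sys/kernel/config/nvmet
  if [[ -e $cfg/subsystems/$nqn ]]; then
      echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # assumed target of the traced `echo 0`
      rm -f "$cfg/ports/1/subsystems/$nqn"                  # unlink subsystem from port 1
      rmdir "$cfg/subsystems/$nqn/namespaces/1"
      rmdir "$cfg/ports/1"
      rmdir "$cfg/subsystems/$nqn"
      modprobe -r nvmet_tcp nvmet                           # unload the kernel target modules
  fi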
00:33:28.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:28.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.641 --rc genhtml_branch_coverage=1 00:33:28.641 --rc genhtml_function_coverage=1 00:33:28.641 --rc genhtml_legend=1 00:33:28.641 --rc geninfo_all_blocks=1 00:33:28.641 --rc geninfo_unexecuted_blocks=1 00:33:28.641 00:33:28.641 ' 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:28.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.641 --rc genhtml_branch_coverage=1 00:33:28.641 --rc genhtml_function_coverage=1 00:33:28.641 --rc genhtml_legend=1 00:33:28.641 --rc geninfo_all_blocks=1 00:33:28.641 --rc geninfo_unexecuted_blocks=1 00:33:28.641 00:33:28.641 ' 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:28.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.641 --rc genhtml_branch_coverage=1 00:33:28.641 --rc genhtml_function_coverage=1 00:33:28.641 --rc genhtml_legend=1 00:33:28.641 --rc geninfo_all_blocks=1 00:33:28.641 --rc geninfo_unexecuted_blocks=1 00:33:28.641 00:33:28.641 ' 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:28.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.641 --rc genhtml_branch_coverage=1 00:33:28.641 --rc genhtml_function_coverage=1 00:33:28.641 --rc genhtml_legend=1 00:33:28.641 --rc geninfo_all_blocks=1 00:33:28.641 --rc geninfo_unexecuted_blocks=1 00:33:28.641 00:33:28.641 ' 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.641 00:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.641 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:28.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:28.642 00:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.642 00:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:28.642 00:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:28.642 00:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:28.642 00:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:36.767 00:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:36.767 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:36.768 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:36.768 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.768 
00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:36.768 Found net devices under 0000:af:00.0: cvl_0_0 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:36.768 Found net devices under 0000:af:00.1: cvl_0_1 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:36.768 00:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:36.768 00:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:36.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:36.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:33:36.768 00:33:36.768 --- 10.0.0.2 ping statistics --- 00:33:36.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.768 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:36.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:36.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:33:36.768 00:33:36.768 --- 10.0.0.1 ping statistics --- 00:33:36.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.768 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=564584 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 564584 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 564584 ']' 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
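[annotation] The nvmftestinit trace above sets up the usual phy TCP topology for these tests: the target-side E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, both directions are ping-checked, and nvmf_tgt is then started inside the namespace with the nvme_auth debug log flag. Condensed from the traced commands:

  # Target NIC goes into its own namespace; initiator NIC stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

  # Launch the target application inside the namespace with nvme_auth logging:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth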
00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:36.768 00:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.768 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:36.768 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:33:36.768 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b841f91e804ec614a8d1d7be57f62f2e 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.tfw 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b841f91e804ec614a8d1d7be57f62f2e 0 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b841f91e804ec614a8d1d7be57f62f2e 0 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b841f91e804ec614a8d1d7be57f62f2e 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.tfw 00:33:36.769 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.tfw 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.tfw 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.029 00:15:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=247a82c1e58a84c86e1a8d3e8627dd0ef813e0ff50dd105ea876255e1469ef11 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.OQp 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 247a82c1e58a84c86e1a8d3e8627dd0ef813e0ff50dd105ea876255e1469ef11 3 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 247a82c1e58a84c86e1a8d3e8627dd0ef813e0ff50dd105ea876255e1469ef11 3 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=247a82c1e58a84c86e1a8d3e8627dd0ef813e0ff50dd105ea876255e1469ef11 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.OQp 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.OQp 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.OQp 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=14e6abc8b9bd1eed336d30e9be2ee2d148089c3d895dfd6d 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jhi 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 14e6abc8b9bd1eed336d30e9be2ee2d148089c3d895dfd6d 0 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 14e6abc8b9bd1eed336d30e9be2ee2d148089c3d895dfd6d 0 
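[annotation] Each gen_dhchap_key call traced above draws len/2 random bytes with xxd, renders them as a len-character hex string, wraps that string in the DHHC-1 interchange format for the chosen digest, and stores the result in a chmod-0600 temp file that is later registered with keyring_file_add_key further down in the trace. A rough, self-contained re-creation follows; the body of the inline python step is not visible in the xtrace, so the encoding shown here (base64 of the secret plus its little-endian CRC-32 behind a two-digit hash id) is an approximation of the interchange format, not SPDK's exact helper.

  gen_dhchap_key() {   # usage: gen_dhchap_key <null|sha256|sha384|sha512> <len>, e.g. gen_dhchap_key null 32
      local digest=$1 len=$2 key file
      declare -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)       # len hex characters of secret material
      file=$(mktemp -t "spdk.key-$digest.XXX")
      python3 - "$key" "${ids[$digest]}" > "$file" <<'PYEOF'
  import base64, sys, zlib
  key, hid = sys.argv[1].encode(), int(sys.argv[2])
  crc = zlib.crc32(key).to_bytes(4, "little")
  print("DHHC-1:{:02x}:{}:".format(hid, base64.b64encode(key + crc).decode()), end="")
  PYEOF
      chmod 0600 "$file"
      echo "$file"
  }

In the run above, gen_dhchap_key null 32 produced the 32-character secret stored in /tmp/spdk.key-null.tfw, and gen_dhchap_key sha512 64 produced its 64-character companion /tmp/spdk.key-sha512.OQp; the remaining keys[]/ckeys[] entries are generated the same way with different digests and lengths.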
00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=14e6abc8b9bd1eed336d30e9be2ee2d148089c3d895dfd6d 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jhi 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jhi 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.jhi 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ca7adbf1e58cf37a6456597da1d4a33093fa64fd5e25b8f5 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1Fo 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ca7adbf1e58cf37a6456597da1d4a33093fa64fd5e25b8f5 2 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ca7adbf1e58cf37a6456597da1d4a33093fa64fd5e25b8f5 2 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ca7adbf1e58cf37a6456597da1d4a33093fa64fd5e25b8f5 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1Fo 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1Fo 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.1Fo 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.029 00:15:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=40d6dcd16afd3a0d4267e11c0bae64bd 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7s9 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 40d6dcd16afd3a0d4267e11c0bae64bd 1 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 40d6dcd16afd3a0d4267e11c0bae64bd 1 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=40d6dcd16afd3a0d4267e11c0bae64bd 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:37.029 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7s9 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7s9 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.7s9 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2605c979d498d9244fb9a2a7b0f4dd97 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Zw6 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2605c979d498d9244fb9a2a7b0f4dd97 1 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2605c979d498d9244fb9a2a7b0f4dd97 1 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=2605c979d498d9244fb9a2a7b0f4dd97 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Zw6 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Zw6 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Zw6 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5da5a23bfbcaff13d3af2423d7cf74b837703ecf748716fa 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.SmX 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5da5a23bfbcaff13d3af2423d7cf74b837703ecf748716fa 2 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5da5a23bfbcaff13d3af2423d7cf74b837703ecf748716fa 2 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5da5a23bfbcaff13d3af2423d7cf74b837703ecf748716fa 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.SmX 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.SmX 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.SmX 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:37.289 00:15:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=979af32f3060e5431dc1b5d3272979ca 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.moc 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 979af32f3060e5431dc1b5d3272979ca 0 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 979af32f3060e5431dc1b5d3272979ca 0 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=979af32f3060e5431dc1b5d3272979ca 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.moc 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.moc 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.moc 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b7266c07fa63389e8ee5bac16d220c045f3156d005dda3c07c2e89398b784da8 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.z4G 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b7266c07fa63389e8ee5bac16d220c045f3156d005dda3c07c2e89398b784da8 3 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b7266c07fa63389e8ee5bac16d220c045f3156d005dda3c07c2e89398b784da8 3 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b7266c07fa63389e8ee5bac16d220c045f3156d005dda3c07c2e89398b784da8 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:37.289 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.z4G 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.z4G 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.z4G 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 564584 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 564584 ']' 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tfw 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.OQp ]] 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OQp 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.548 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.549 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.549 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:37.549 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.jhi 00:33:37.549 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.549 00:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.1Fo ]] 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.1Fo 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.7s9 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Zw6 ]] 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Zw6 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.549 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.SmX 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.moc ]] 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.moc 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.z4G 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:37.810 00:15:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:37.810 00:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:41.101 Waiting for block devices as requested 00:33:41.101 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:41.101 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:41.101 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:41.101 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:41.360 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:41.360 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:41.360 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:41.619 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:41.619 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:41.619 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:41.878 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:41.878 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:41.878 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:42.137 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:42.137 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:42.137 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:42.396 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:42.965 No valid GPT data, bailing 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:42.965 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:43.227 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:43.227 00:15:27 
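(Editor's note on the kernel target setup traced here.) The modprobe/mkdir calls above, together with the echo calls that follow, are configure_kernel_target building a Linux nvmet soft target in configfs so the SPDK host code has something to authenticate against. A condensed sketch of the same steps with the values from this run; the attribute names are the standard nvmet configfs ones and are assumed, since the xtrace only records the values being echoed:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe nvmet
    mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"        # destination assumed
    echo 1             > "$subsys/attr_allow_any_host"                 # destination assumed
    echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
    echo 1             > "$subsys/namespaces/1/enable"
    echo 10.0.0.1      > "$nvmet/ports/1/addr_traddr"
    echo tcp           > "$nvmet/ports/1/addr_trtype"
    echo 4420          > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4          > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

With the port linked, the nvme discover call just below lists both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420, which is what the two Discovery Log entries show.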
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:43.227 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:33:43.227 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:43.227 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:33:43.228 00:33:43.228 Discovery Log Number of Records 2, Generation counter 2 00:33:43.228 =====Discovery Log Entry 0====== 00:33:43.228 trtype: tcp 00:33:43.228 adrfam: ipv4 00:33:43.228 subtype: current discovery subsystem 00:33:43.228 treq: not specified, sq flow control disable supported 00:33:43.228 portid: 1 00:33:43.228 trsvcid: 4420 00:33:43.228 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:43.228 traddr: 10.0.0.1 00:33:43.228 eflags: none 00:33:43.228 sectype: none 00:33:43.228 =====Discovery Log Entry 1====== 00:33:43.228 trtype: tcp 00:33:43.228 adrfam: ipv4 00:33:43.228 subtype: nvme subsystem 00:33:43.228 treq: not specified, sq flow control disable supported 00:33:43.228 portid: 1 00:33:43.228 trsvcid: 4420 00:33:43.228 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:43.228 traddr: 10.0.0.1 00:33:43.228 eflags: none 00:33:43.228 sectype: none 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.228 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.489 nvme0n1 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
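(Editor's note on the RPC pattern that repeats from here.) The secret files were registered with the SPDK keyring once, in the host/auth.sh@80-82 loop above; after that, every digest/dhgroup/keyid combination restricts bdev_nvme to a single digest and DH group and then attaches and detaches a controller with the matching key pair. Stripped of the xtrace prefixes, one pass boils down to the following sequence (rpc_cmd is the test wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; key paths are the ones generated earlier in this run):

    rpc_cmd keyring_file_add_key key1  /tmp/spdk.key-null.jhi          # one-time setup, done above
    rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1Fo        # one-time setup, done above
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc_cmd bdev_nvme_get_controllers          # expect nvme0 in the output
    rpc_cmd bdev_nvme_detach_controller nvme0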
00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.489 00:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.749 nvme0n1 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:43.749 00:15:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.749 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.009 nvme0n1 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:44.009 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.010 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.270 nvme0n1 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.270 nvme0n1 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.270 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.530 nvme0n1 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.530 00:15:28 
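(Editor's note on the target-side half of each iteration.) Each nvmet_auth_set_key call, the echo 'hmac(shaX)' / echo ffdheNNNN / echo DHHC-1:... triplets in the traces above, points the kernel target's per-host DH-HMAC-CHAP settings at the same secrets the SPDK keyring holds. A sketch of where those echoes are assumed to land, using the standard nvmet per-host configfs attributes (only the values, not the destinations, are visible in the xtrace):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"        # digest under test
    echo ffdhe2048      > "$host/dhchap_dhgroup"     # DH group under test
    echo "$key"         > "$host/dhchap_key"         # host secret, DHHC-1:... from keys[keyid]
    echo "$ckey"        > "$host/dhchap_ctrl_key"    # controller secret, only when ckeys[keyid] is non-empty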
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.530 00:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.788 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.788 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:44.788 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.788 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:44.788 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.788 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:44.788 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:44.788 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:44.788 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:44.788 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:44.788 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:44.788 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:44.788 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.789 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.048 nvme0n1 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:45.048 
00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.048 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.308 nvme0n1 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.308 00:15:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.308 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.567 nvme0n1 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.567 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.567 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.567 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:45.567 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:45.567 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:45.567 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.567 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.567 00:15:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:45.567 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.567 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:45.567 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:45.567 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:45.567 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:45.567 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.567 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.826 nvme0n1 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:45.826 00:15:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.826 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.827 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.827 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:45.827 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:45.827 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:45.827 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.827 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.827 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:45.827 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.827 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:45.827 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:45.827 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:45.827 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:45.827 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.827 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.087 nvme0n1 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.087 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:46.655 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:46.655 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:33:46.655 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:46.655 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:46.655 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.655 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.656 00:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.915 nvme0n1 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:46.915 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:46.916 00:15:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.916 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.174 nvme0n1 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.174 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.175 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.434 nvme0n1 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.434 00:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.694 nvme0n1 00:33:47.694 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.694 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.694 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.694 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.694 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.694 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.694 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.694 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.694 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.694 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.951 00:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.951 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.951 nvme0n1 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:48.209 00:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:49.585 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:49.585 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:33:49.585 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:49.585 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:49.585 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.585 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:49.585 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:49.585 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:49.585 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.585 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.586 00:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.846 nvme0n1 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 
00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.846 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.106 nvme0n1 00:33:50.106 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.106 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.106 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.106 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.106 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.106 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.366 00:15:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:50.366 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:50.367 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:50.367 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.367 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.626 nvme0n1 00:33:50.626 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.626 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.626 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.626 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.626 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.626 00:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
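Each connect_authenticate pass in this trace is the same four host-side steps: restrict the initiator to one digest/DH-group pair, attach with the matching secrets, check that the controller shows up, then detach for the next iteration. A minimal sketch of that cycle, assuming rpc.py lives at ./scripts/rpc.py and that key1/ckey1 are keyring entries registered earlier in the test (that registration is not visible in this part of the log):

  # Hedged sketch of one connect/verify/detach cycle as driven above.
  rpc=./scripts/rpc.py           # assumed path to SPDK's rpc.py

  digest=sha256 dhgroup=ffdhe6144

  # 1. Limit DH-CHAP negotiation to the pair under test.
  $rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # 2. Attach; --dhchap-key/--dhchap-ctrlr-key take the *names* of pre-registered keys,
  #    not the raw DHHC-1 strings.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # 3. Verify the controller came up under the expected name.
  name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  # 4. Tear down before the next digest/dhgroup/key combination.
  $rpc bdev_nvme_detach_controller nvme0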
-- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.626 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.194 nvme0n1 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.194 00:15:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:51.194 00:15:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.194 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.454 nvme0n1 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
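The get_main_ns_ip fragments that repeat before every attach just decide which environment variable carries the target address for the transport in use and print its value (10.0.0.1 throughout this run). Below is a compact reconstruction of that selection, assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_INITIATOR_IP are exported by the test environment; the indirect expansion step is implied rather than shown in the trace.

  # Hedged reconstruction of the address-selection logic seen above.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read, e.g. NVMF_INITIATOR_IP
      ip=${!ip}                              # indirect expansion -> the actual address
      [[ -z $ip ]] && return 1
      echo "$ip"                             # 10.0.0.1 in this run
  }

  # Usage: addr=$(get_main_ns_ip), with TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 exported.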
ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.454 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.717 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.717 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.717 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:51.717 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:51.717 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:51.717 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.718 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.718 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:51.718 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.718 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:51.718 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:51.718 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:51.718 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:51.718 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.718 00:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:52.288 nvme0n1 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.288 00:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.857 nvme0n1 00:33:52.857 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.857 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.857 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.857 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.857 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.857 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.857 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:52.858 
00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.858 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.427 nvme0n1 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.427 
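The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion that precedes every attach is what makes the bidirectional secret optional: when the ckeys entry for a key id is empty (key id 4 in this run), the array stays empty and no --dhchap-ctrlr-key argument is passed at all. A small self-contained illustration of that idiom (values hypothetical):

  # Hedged illustration of the ${var:+...} idiom used for the optional controller key.
  ckeys=([1]='DHHC-1:02:Y2E3...==:' [4]='')   # indexed array; entry 4 deliberately empty

  for keyid in 1 4; do
      # Expands to two extra words when the secret exists, to nothing when it is empty.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> extra args: ${ckey[*]:-<none>}"
  done
  # keyid=1 -> extra args: --dhchap-ctrlr-key ckey1
  # keyid=4 -> extra args: <none>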
00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.427 00:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.995 nvme0n1 00:33:53.995 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.995 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.995 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.995 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.995 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.995 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.995 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.995 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:53.995 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.995 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.253 00:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.822 nvme0n1 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
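Stepping back, this whole section is one pass of the test's outer loops (host/auth.sh@100 through @104): for every digest, every DH group, and every key id it first provisions the target and then connects; the trace has just rolled over from sha256/ffdhe8192 into sha384/ffdhe2048. A schematic of that driver follows, where the exact contents of the digest and dhgroup lists are partly assumed (only the values visible in this run are certain) and the helper bodies are the two sketches earlier in this section.

  # Hedged schematic of the outer loops driving this trace (helper bodies omitted).
  digests=(sha256 sha384 sha512)          # sha256/sha384 visible above; sha512 assumed
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # 2048/6144/8192 visible; rest assumed
  keys=(key0 key1 key2 key3 key4)         # names of the pre-registered keyring entries

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: set_options + attach + verify + detach
          done
      done
  done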
DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.822 nvme0n1 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:54.822 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.081 nvme0n1 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:55.081 00:15:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:55.081 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.341 nvme0n1 00:33:55.341 00:15:39 
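All of the secrets in this trace use the DHHC-1 textual form DHHC-1:<id>:<base64 payload>:. Reading the middle field as the hash applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) is my interpretation of the NVMe DH-HMAC-CHAP secret representation, not something the log itself states; with that caveat, a small shape check like the one below can catch a mangled key before it is written anywhere.

  # Hedged sanity check for DHHC-1 secret strings like the ones above.
  check_dhchap_secret() {
      local secret=$1
      local re='^DHHC-1:(0[0-3]):([A-Za-z0-9+/=]+):$'   # expected shape: DHHC-1:<two-digit id>:<base64>:
      if [[ ! $secret =~ $re ]]; then
          echo "malformed DHHC-1 secret" >&2
          return 1
      fi
      local id=${BASH_REMATCH[1]} payload=${BASH_REMATCH[2]}
      # Payload length grows with the id (assumed: 00 unhashed, 01 SHA-256, 02 SHA-384, 03 SHA-512).
      echo "id $id, $(echo -n "$payload" | base64 -d | wc -c) decoded bytes"
  }

  check_dhchap_secret 'DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=:'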
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.341 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:55.342 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.342 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:55.342 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:55.342 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:55.342 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:55.342 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.342 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.601 nvme0n1 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.601 00:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.601 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.860 nvme0n1 00:33:55.860 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.860 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.860 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.860 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.860 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.860 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.861 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.120 nvme0n1 00:33:56.120 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.120 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.120 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.121 
00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.121 00:15:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.121 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.380 nvme0n1 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.380 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.639 nvme0n1 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.639 00:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.639 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.899 nvme0n1 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:56.899 
00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.899 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.159 nvme0n1 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.159 
00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.159 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.419 nvme0n1 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.419 00:15:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.419 00:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.678 nvme0n1 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.678 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.937 nvme0n1 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.937 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.197 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.456 nvme0n1 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.456 00:15:42 
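
For readers following the trace, the host-side portion of each iteration above reduces to two RPC calls: restrict the initiator to the digest/DH-group pair under test, then attach the controller with the key (and, when configured, the controller key) for the current keyid. Below is a minimal sketch of that sequence using the same RPCs, address, and NQNs that appear in the trace; rpc_cmd is the test-harness wrapper seen throughout this log, and key3/ckey3 are assumed to have been loaded earlier in the script.

  # Host side of one connect_authenticate iteration (sha384 / ffdhe4096 / keyid 3),
  # reconstructed from the trace. rpc_cmd is the autotest wrapper around SPDK's
  # JSON-RPC client; key3/ckey3 are names of keys created earlier in the test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3

If the DH-HMAC-CHAP handshake succeeds, the new controller shows up as nvme0 in bdev_nvme_get_controllers, which is exactly what the host/auth.sh@64 check later in the trace asserts before detaching.
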
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.456 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.457 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.457 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.457 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.457 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.457 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.457 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:58.457 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.457 00:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.716 nvme0n1 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.716 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.285 nvme0n1 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:33:59.285 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.286 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.544 nvme0n1 00:33:59.544 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.544 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.544 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.544 00:15:43 
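
The repeated ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line at host/auth.sh@58 is a bash idiom worth calling out: the array gains the extra flag and its argument only when a controller key exists for that keyid, so the attach command can always expand "${ckey[@]}" and simply pass nothing when no controller key is configured. A stand-alone sketch of the behaviour, with placeholder key material (in the trace it is the last keyid whose controller key is empty):

  #!/usr/bin/env bash
  # Illustration of the conditional flag expansion used at host/auth.sh@58.
  # Key values are placeholders; the last entry is deliberately empty.
  ckeys=("secret-a" "secret-b" "")

  for keyid in "${!ckeys[@]}"; do
      # Expands to two words when ckeys[keyid] is non-empty, to zero words otherwise.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
  done
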
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.544 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.544 00:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.544 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.544 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.544 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.544 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.806 00:15:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.806 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.067 nvme0n1 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:00.067 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.067 
00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.636 nvme0n1 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.636 00:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.896 nvme0n1 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.896 00:15:45 
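
Every iteration in the trace ends the same way: the test asserts that exactly one controller named nvme0 came up, then tears it down before moving to the next digest/dhgroup/key combination. A short sketch of that check-and-cleanup step, using the same RPCs seen at host/auth.sh@64-65 (jq is available in this environment, as the trace shows):

  # Verify the authenticated attach produced the expected controller, then detach it.
  ctrlr_name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$ctrlr_name" == "nvme0" ]]            # a mismatch aborts the test under set -e
  rpc_cmd bdev_nvme_detach_controller nvme0
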
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.896 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.155 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.155 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.155 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.155 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.155 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.155 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.155 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.155 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.155 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:01.155 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.155 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:01.155 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.155 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.725 nvme0n1 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.725 00:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.725 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.294 nvme0n1 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.294 
00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.294 00:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.864 nvme0n1 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:34:02.864 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
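
The block of nvmf/common.sh lines that repeats before every attach (@769 through @783) is the get_main_ns_ip helper resolving which address the initiator should dial: an RDMA run would use NVMF_FIRST_TARGET_IP, while this TCP run falls back to NVMF_INITIATOR_IP, which the trace shows echoing 10.0.0.1. The following is a simplified, hedged reconstruction of that logic; the addresses are placeholders taken from the log, and the real helper reads them from the test environment.

  # Simplified reconstruction of get_main_ns_ip as it behaves in this tcp run.
  TEST_TRANSPORT=tcp
  NVMF_INITIATOR_IP=10.0.0.1       # value echoed in the trace
  NVMF_FIRST_TARGET_IP=10.0.0.2    # placeholder: only consulted for rdma runs

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      ip=${ip_candidates[$TEST_TRANSPORT]}   # pick the variable *name* per transport
      ip=${!ip}                              # then dereference it
      [[ -n "$ip" ]] && echo "$ip"
  }

  get_main_ns_ip   # prints 10.0.0.1 here
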
DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.865 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.434 nvme0n1 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.434 00:15:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:03.434 00:15:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.434 00:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.003 nvme0n1 00:34:04.003 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.003 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.003 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.003 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.003 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.003 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.003 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.003 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.003 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.003 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:04.263 nvme0n1 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.263 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:04.264 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.523 nvme0n1 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:04.523 
00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.523 00:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.783 nvme0n1 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.783 
00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.783 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.043 nvme0n1 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.043 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.302 nvme0n1 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:05.302 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:05.303 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:05.303 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.303 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.303 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:05.303 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.303 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:05.303 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:05.303 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:05.303 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:05.303 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.303 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.562 nvme0n1 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.562 
00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:05.562 00:15:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:05.562 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:05.563 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:05.563 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.563 00:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.822 nvme0n1 00:34:05.822 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.822 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.822 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.822 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.822 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.822 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.822 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.822 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.822 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.822 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.822 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.822 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.822 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:05.823 00:15:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.823 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.082 nvme0n1 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:06.082 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.083 00:15:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.083 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.343 nvme0n1 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:06.343 
00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.343 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
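Each iteration above follows the same host-side pattern once the target key is in place: restrict the host to a single DH-HMAC-CHAP digest and DH group, resolve the initiator address (10.0.0.1 over TCP in this run), attach a controller that must authenticate with the key under test, check that nvme0 actually appears, then detach it before the next combination. A minimal sketch of one such round, assuming rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client used throughout this log and that key3/ckey3 name DH-HMAC-CHAP secrets already set up on the host earlier in the run (not shown in this excerpt):

  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3   # omit --dhchap-ctrlr-key for key ids with no ckey (e.g. key4 above)
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # prints nvme0 only if authentication succeeded
  rpc_cmd bdev_nvme_detach_controller nvme0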
00:34:06.602 nvme0n1 00:34:06.602 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.602 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.602 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.602 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:06.603 00:15:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.603 00:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.862 nvme0n1 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.862 00:15:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.862 00:15:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.862 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.121 nvme0n1 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:07.121 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.122 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.381 nvme0n1 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:34:07.381 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.382 00:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.640 nvme0n1 00:34:07.640 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.640 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.640 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.640 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.640 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.640 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.640 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.640 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.640 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.640 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.900 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.160 nvme0n1 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.160 00:15:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.160 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.420 nvme0n1 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:08.420 00:15:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.420 00:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.989 nvme0n1 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.989 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.249 nvme0n1 00:34:09.249 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.249 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.249 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.249 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.249 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.249 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.249 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.249 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.249 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.249 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.508 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.509 00:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.769 nvme0n1 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.769 00:15:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.769 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.337 nvme0n1 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0MWY5MWU4MDRlYzYxNGE4ZDFkN2JlNTdmNjJmMmW7Opy9: 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: ]] 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ3YTgyYzFlNThhODRjODZlMWE4ZDNlODYyN2RkMGVmODEzZTBmZjUwZGQxMDVlYTg3NjI1NWUxNDY5ZWYxMUyrYJY=: 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.337 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.338 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:10.338 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.338 00:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.918 nvme0n1 00:34:10.918 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.918 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.919 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.488 nvme0n1 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.488 00:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.488 00:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.488 00:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.056 nvme0n1 00:34:12.056 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.056 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.056 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.056 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWRhNWEyM2JmYmNhZmYxM2QzYWYyNDIzZDdjZjc0YjgzNzcwM2VjZjc0ODcxNmZhiqT5tQ==: 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: ]] 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc5YWYzMmYzMDYwZTU0MzFkYzFiNWQzMjcyOTc5Y2F2ahE+: 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.057 00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.057 
00:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.625 nvme0n1 00:34:12.625 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.625 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.625 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.625 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.625 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.625 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjcyNjZjMDdmYTYzMzg5ZThlZTViYWMxNmQyMjBjMDQ1ZjMxNTZkMDA1ZGRhM2MwN2MyZTg5Mzk4Yjc4NGRhOH6Sk74=: 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.884 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.452 nvme0n1 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:13.452 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.453 request: 00:34:13.453 { 00:34:13.453 "name": "nvme0", 00:34:13.453 "trtype": "tcp", 00:34:13.453 "traddr": "10.0.0.1", 00:34:13.453 "adrfam": "ipv4", 00:34:13.453 "trsvcid": "4420", 00:34:13.453 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:13.453 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:13.453 "prchk_reftag": false, 00:34:13.453 "prchk_guard": false, 00:34:13.453 "hdgst": false, 00:34:13.453 "ddgst": false, 00:34:13.453 "allow_unrecognized_csi": false, 00:34:13.453 "method": "bdev_nvme_attach_controller", 00:34:13.453 "req_id": 1 00:34:13.453 } 00:34:13.453 Got JSON-RPC error response 00:34:13.453 response: 00:34:13.453 { 00:34:13.453 "code": -5, 00:34:13.453 "message": "Input/output error" 00:34:13.453 } 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
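The request and error response above form the first negative check in host/auth.sh: a host that presents no DH-HMAC-CHAP key must be refused by a target that requires authentication. A minimal, hedged sketch of the same check driven through SPDK's rpc.py follows; the rpc.py path and the plain if-wrapper are illustrative stand-ins for the test's rpc_cmd and NOT helpers, while the address, port and NQNs are the ones from the trace.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# No --dhchap-key on the host side while the target requires DH-HMAC-CHAP:
# the attach is expected to fail with "Input/output error" (code -5), as logged above.
if "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
    echo "unexpected success: unauthenticated connect should have been rejected" >&2
    exit 1
fi

The two checks that follow in the trace repeat the same call with --dhchap-key key2 (a key the target was not configured with) and with --dhchap-key key1 --dhchap-ctrlr-key ckey2 (a mismatched controller key); both are likewise expected to fail with code -5.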
00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.453 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.712 request: 00:34:13.712 { 00:34:13.712 "name": "nvme0", 00:34:13.712 "trtype": "tcp", 00:34:13.712 "traddr": "10.0.0.1", 00:34:13.712 "adrfam": "ipv4", 00:34:13.712 "trsvcid": "4420", 00:34:13.712 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:13.712 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:13.712 "prchk_reftag": false, 00:34:13.712 "prchk_guard": false, 00:34:13.712 "hdgst": false, 00:34:13.712 "ddgst": false, 00:34:13.712 "dhchap_key": "key2", 00:34:13.712 "allow_unrecognized_csi": false, 00:34:13.712 "method": "bdev_nvme_attach_controller", 00:34:13.712 "req_id": 1 00:34:13.712 } 00:34:13.712 Got JSON-RPC error response 00:34:13.712 response: 00:34:13.712 { 00:34:13.712 "code": -5, 00:34:13.712 "message": "Input/output error" 00:34:13.712 } 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
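Each nvmet_auth_set_key trace in this run (the echo 'hmac(...)', echo ffdhe..., echo DHHC-1:... steps at host/auth.sh@48 through @51) re-programs the kernel nvmet target before the next host-side attempt. bash xtrace does not print redirections, so the configfs attribute names in the sketch below are an assumption based on the Linux nvmet authentication interface rather than something visible in this log; the host directory itself matches the path removed during cleanup at the end of the run. A hedged reconstruction for the sha256/ffdhe2048 keyid 1 case set up just before these checks:

# Assumed attribute names: dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha256)' > "$host_dir/dhchap_hash"     # digest echoed at host/auth.sh@48
echo ffdhe2048      > "$host_dir/dhchap_dhgroup"  # DH group echoed at host/auth.sh@49
echo 'DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==:' \
    > "$host_dir/dhchap_key"                      # host key for keyid 1 (host/auth.sh@50)
echo 'DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==:' \
    > "$host_dir/dhchap_ctrl_key"                 # bidirectional key, written only when a ckey is set (host/auth.sh@51)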
00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.712 00:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.712 request: 00:34:13.712 { 00:34:13.712 "name": "nvme0", 00:34:13.712 "trtype": "tcp", 00:34:13.712 "traddr": "10.0.0.1", 00:34:13.712 "adrfam": "ipv4", 00:34:13.712 "trsvcid": "4420", 00:34:13.712 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:13.712 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:13.712 "prchk_reftag": false, 00:34:13.712 "prchk_guard": false, 00:34:13.712 "hdgst": false, 00:34:13.712 "ddgst": false, 00:34:13.712 "dhchap_key": "key1", 00:34:13.712 "dhchap_ctrlr_key": "ckey2", 00:34:13.712 "allow_unrecognized_csi": false, 00:34:13.712 "method": "bdev_nvme_attach_controller", 00:34:13.712 "req_id": 1 00:34:13.712 } 00:34:13.712 Got JSON-RPC error response 00:34:13.712 response: 00:34:13.712 { 00:34:13.712 "code": -5, 00:34:13.712 "message": "Input/output 
error" 00:34:13.712 } 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.712 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.971 nvme0n1 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.971 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.972 request: 00:34:13.972 { 00:34:13.972 "name": "nvme0", 00:34:13.972 "dhchap_key": "key1", 00:34:13.972 "dhchap_ctrlr_key": "ckey2", 00:34:13.972 "method": "bdev_nvme_set_keys", 00:34:13.972 "req_id": 1 00:34:13.972 } 00:34:13.972 Got JSON-RPC error response 00:34:13.972 response: 00:34:13.972 { 00:34:13.972 "code": -13, 00:34:13.972 "message": "Permission denied" 00:34:13.972 } 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:13.972 00:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:15.349 00:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.349 00:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:15.349 00:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.349 00:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.349 00:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.349 00:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:15.349 00:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTRlNmFiYzhiOWJkMWVlZDMzNmQzMGU5YmUyZWUyZDE0ODA4OWMzZDg5NWRmZDZk43q4uA==: 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: ]] 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Y2E3YWRiZjFlNThjZjM3YTY0NTY1OTdkYTFkNGEzMzA5M2ZhNjRmZDVlMjViOGY1hbbNxA==: 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.287 nvme0n1 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkNmRjZDE2YWZkM2EwZDQyNjdlMTFjMGJhZTY0YmSvjwf2: 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: ]] 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjYwNWM5NzlkNDk4ZDkyNDRmYjlhMmE3YjBmNGRkOTdcbQon: 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.287 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.546 request: 00:34:16.546 { 00:34:16.546 "name": "nvme0", 00:34:16.546 "dhchap_key": "key2", 00:34:16.546 "dhchap_ctrlr_key": "ckey1", 00:34:16.546 "method": "bdev_nvme_set_keys", 00:34:16.546 "req_id": 1 00:34:16.546 } 00:34:16.546 Got JSON-RPC error response 00:34:16.546 response: 00:34:16.546 { 00:34:16.546 "code": -13, 00:34:16.546 "message": "Permission denied" 00:34:16.546 } 00:34:16.546 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:16.546 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:16.546 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:16.546 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:16.546 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:16.546 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.546 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:16.546 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.546 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.546 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.546 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:16.546 00:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:17.481 00:16:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:17.481 rmmod nvme_tcp 00:34:17.481 rmmod nvme_fabrics 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 564584 ']' 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 564584 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 564584 ']' 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 564584 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:17.481 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 564584 00:34:17.740 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:17.740 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:17.740 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 564584' 00:34:17.740 killing process with pid 564584 00:34:17.740 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 564584 00:34:17.740 00:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 564584 00:34:17.740 00:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:17.740 00:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:17.740 00:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:17.740 00:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:17.740 00:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:34:17.740 00:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:17.740 00:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:17.740 00:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.740 00:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:17.740 00:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.740 00:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:34:17.740 00:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.276 00:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:20.276 00:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:20.276 00:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:20.276 00:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:20.276 00:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:20.276 00:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:20.276 00:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:20.276 00:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:20.276 00:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:20.276 00:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:20.276 00:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:20.276 00:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:20.276 00:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:23.574 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:23.574 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:24.955 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:34:25.214 00:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.tfw /tmp/spdk.key-null.jhi /tmp/spdk.key-sha256.7s9 /tmp/spdk.key-sha384.SmX /tmp/spdk.key-sha512.z4G /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:25.214 00:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:28.514 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 
00:34:28.514 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:28.514 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:28.774 00:34:28.774 real 1m0.260s 00:34:28.774 user 0m52.439s 00:34:28.774 sys 0m15.958s 00:34:28.774 00:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:28.774 00:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.774 ************************************ 00:34:28.774 END TEST nvmf_auth_host 00:34:28.774 ************************************ 00:34:28.774 00:16:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:28.774 00:16:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:28.774 00:16:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:28.774 00:16:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:28.774 00:16:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.774 ************************************ 00:34:28.774 START TEST nvmf_digest 00:34:28.774 ************************************ 00:34:28.774 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:28.774 * Looking for test storage... 
00:34:28.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:28.774 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:28.774 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:34:28.774 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:29.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.034 --rc genhtml_branch_coverage=1 00:34:29.034 --rc genhtml_function_coverage=1 00:34:29.034 --rc genhtml_legend=1 00:34:29.034 --rc geninfo_all_blocks=1 00:34:29.034 --rc geninfo_unexecuted_blocks=1 00:34:29.034 00:34:29.034 ' 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:29.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.034 --rc genhtml_branch_coverage=1 00:34:29.034 --rc genhtml_function_coverage=1 00:34:29.034 --rc genhtml_legend=1 00:34:29.034 --rc geninfo_all_blocks=1 00:34:29.034 --rc geninfo_unexecuted_blocks=1 00:34:29.034 00:34:29.034 ' 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:29.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.034 --rc genhtml_branch_coverage=1 00:34:29.034 --rc genhtml_function_coverage=1 00:34:29.034 --rc genhtml_legend=1 00:34:29.034 --rc geninfo_all_blocks=1 00:34:29.034 --rc geninfo_unexecuted_blocks=1 00:34:29.034 00:34:29.034 ' 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:29.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.034 --rc genhtml_branch_coverage=1 00:34:29.034 --rc genhtml_function_coverage=1 00:34:29.034 --rc genhtml_legend=1 00:34:29.034 --rc geninfo_all_blocks=1 00:34:29.034 --rc geninfo_unexecuted_blocks=1 00:34:29.034 00:34:29.034 ' 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.034 
00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.034 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:29.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:29.035 00:16:13 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:29.035 00:16:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:37.163 
00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:37.163 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:37.163 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:37.163 Found net devices under 0000:af:00.0: cvl_0_0 
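Aside: the device-discovery trace above reduces to one sysfs relationship: each supported PCI function is expected to expose its bound kernel net device under /sys/bus/pci/devices/<bdf>/net/, which is exactly the glob nvmf/common.sh expands into pci_net_devs. A minimal standalone sketch of that lookup follows; the default BDF and the operstate read are illustrative assumptions, not taken from this run.

    #!/usr/bin/env bash
    # Sketch: list the net devices registered under one PCI function, the same
    # sysfs path the test script expands when building pci_net_devs.
    pci=${1:-0000:af:00.0}                        # assumed BDF; pass your own
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue                 # nothing bound to this function
        name=${dev##*/}
        state=$(cat "$dev/operstate" 2>/dev/null)
        echo "Found net device under $pci: $name (operstate: ${state:-unknown})"
    done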
00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:37.163 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:37.164 Found net devices under 0000:af:00.1: cvl_0_1 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:37.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:37.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:34:37.164 00:34:37.164 --- 10.0.0.2 ping statistics --- 00:34:37.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.164 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:37.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:37.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:34:37.164 00:34:37.164 --- 10.0.0.1 ping statistics --- 00:34:37.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.164 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:37.164 ************************************ 00:34:37.164 START TEST nvmf_digest_clean 00:34:37.164 ************************************ 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=579522 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 579522 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 579522 ']' 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:37.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:37.164 00:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:37.164 [2024-12-10 00:16:20.676177] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:34:37.164 [2024-12-10 00:16:20.676227] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:37.164 [2024-12-10 00:16:20.771165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.164 [2024-12-10 00:16:20.811035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:37.164 [2024-12-10 00:16:20.811072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:37.164 [2024-12-10 00:16:20.811082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:37.164 [2024-12-10 00:16:20.811090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:37.164 [2024-12-10 00:16:20.811097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
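Aside: the target-side bring-up traced above condenses to a short sequence: flush and move one E810 port into a private network namespace, address both ends, open TCP/4420 through iptables, verify reachability with ping, then start nvmf_tgt inside the namespace paused on --wait-for-rpc. The sketch below replays it with the names from this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2); SPDK_DIR is our shorthand for the workspace path and error handling is omitted.

    #!/usr/bin/env bash
    # Simplified sketch of nvmf_tcp_init plus the nvmf_tgt launch traced above.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                   # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1            # namespace -> root ns

    # Start the target inside the namespace, paused until framework_start_init.
    ip netns exec "$NS" "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &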
00:34:37.164 [2024-12-10 00:16:20.811675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:37.164 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:37.164 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:37.164 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:37.164 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:37.164 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:37.164 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:37.164 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:37.164 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:37.164 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:37.164 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.164 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:37.424 null0 00:34:37.424 [2024-12-10 00:16:21.642606] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:37.424 [2024-12-10 00:16:21.666795] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=579800 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 579800 /var/tmp/bperf.sock 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 579800 ']' 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:37.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:37.424 00:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:37.424 [2024-12-10 00:16:21.723149] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:34:37.424 [2024-12-10 00:16:21.723194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid579800 ] 00:34:37.424 [2024-12-10 00:16:21.812897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.424 [2024-12-10 00:16:21.852759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.362 00:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:38.362 00:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:38.362 00:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:38.362 00:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:38.362 00:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:38.362 00:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:38.362 00:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:38.934 nvme0n1 00:34:38.934 00:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:38.934 00:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:38.934 Running I/O for 2 seconds... 
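Aside: on the initiator side, run_bperf follows the same pattern for every block size / queue depth combination: launch bdevperf paused, finish framework init over its private RPC socket, attach the target subsystem as an NVMe bdev with TCP data digest enabled (--ddgst), and trigger the preconfigured workload. A hand-run sketch of that sequence follows; SPDK_DIR is again our shorthand for the workspace path, and the crude polling loop stands in for the real waitforlisten helper.

    #!/usr/bin/env bash
    # Sketch of one run_bperf pass (randread, 4 KiB, QD 128) against the target
    # already listening on 10.0.0.2:4420.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    SOCK=/var/tmp/bperf.sock

    "$SPDK_DIR"/build/examples/bdevperf -m 2 -r "$SOCK" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # Crude stand-in for waitforlisten: poll until the RPC socket answers.
    until "$SPDK_DIR"/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    "$SPDK_DIR"/scripts/rpc.py -s "$SOCK" framework_start_init

    # Attach the remote subsystem as bdev nvme0 with the TCP data digest on.
    "$SPDK_DIR"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Run the workload configured on the bdevperf command line for 2 seconds.
    "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests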
00:34:40.881 25481.00 IOPS, 99.54 MiB/s [2024-12-09T23:16:25.354Z] 25847.00 IOPS, 100.96 MiB/s 00:34:40.881 Latency(us) 00:34:40.881 [2024-12-09T23:16:25.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.881 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:40.881 nvme0n1 : 2.00 25862.39 101.02 0.00 0.00 4944.56 2451.05 11062.48 00:34:40.881 [2024-12-09T23:16:25.354Z] =================================================================================================================== 00:34:40.881 [2024-12-09T23:16:25.354Z] Total : 25862.39 101.02 0.00 0.00 4944.56 2451.05 11062.48 00:34:40.881 { 00:34:40.881 "results": [ 00:34:40.881 { 00:34:40.881 "job": "nvme0n1", 00:34:40.881 "core_mask": "0x2", 00:34:40.881 "workload": "randread", 00:34:40.881 "status": "finished", 00:34:40.881 "queue_depth": 128, 00:34:40.881 "io_size": 4096, 00:34:40.881 "runtime": 2.003759, 00:34:40.881 "iops": 25862.391634922165, 00:34:40.881 "mibps": 101.02496732391471, 00:34:40.881 "io_failed": 0, 00:34:40.881 "io_timeout": 0, 00:34:40.881 "avg_latency_us": 4944.560436571341, 00:34:40.881 "min_latency_us": 2451.0464, 00:34:40.881 "max_latency_us": 11062.4768 00:34:40.881 } 00:34:40.881 ], 00:34:40.881 "core_count": 1 00:34:40.881 } 00:34:40.881 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:40.881 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:40.881 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:40.881 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:40.881 | select(.opcode=="crc32c") 00:34:40.881 | "\(.module_name) \(.executed)"' 00:34:40.881 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 579800 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 579800 ']' 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 579800 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 579800 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo 
']' 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 579800' 00:34:41.186 killing process with pid 579800 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 579800 00:34:41.186 Received shutdown signal, test time was about 2.000000 seconds 00:34:41.186 00:34:41.186 Latency(us) 00:34:41.186 [2024-12-09T23:16:25.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.186 [2024-12-09T23:16:25.659Z] =================================================================================================================== 00:34:41.186 [2024-12-09T23:16:25.659Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:41.186 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 579800 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=580349 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 580349 /var/tmp/bperf.sock 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 580349 ']' 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:41.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:41.511 00:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:41.511 [2024-12-10 00:16:25.716341] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:34:41.511 [2024-12-10 00:16:25.716394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid580349 ] 00:34:41.511 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:41.511 Zero copy mechanism will not be used. 00:34:41.511 [2024-12-10 00:16:25.809259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.511 [2024-12-10 00:16:25.849227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:42.079 00:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:42.079 00:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:42.079 00:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:42.079 00:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:42.079 00:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:42.338 00:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:42.338 00:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:42.905 nvme0n1 00:34:42.906 00:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:42.906 00:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:42.906 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:42.906 Zero copy mechanism will not be used. 00:34:42.906 Running I/O for 2 seconds... 
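Aside: after each pass the script judges success from accel-framework statistics rather than from throughput alone: it reads accel_get_stats over the same bperf socket, filters the crc32c opcode with jq, and requires that the expected module (software here, since DSA offload is disabled) executed a non-zero count. A minimal sketch of that check; the SOCK default simply mirrors the -r argument used above.

    #!/usr/bin/env bash
    # Sketch of the digest verification step applied after each bdevperf run.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    SOCK=${SOCK:-/var/tmp/bperf.sock}

    read -r acc_module acc_executed < <(
        "$SPDK_DIR"/scripts/rpc.py -s "$SOCK" accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    if [[ $acc_module == software && $acc_executed -gt 0 ]]; then
        echo "digest OK: $acc_executed crc32c operations executed by $acc_module"
    else
        echo "digest check failed: module=$acc_module executed=$acc_executed" >&2
        exit 1
    fi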
00:34:44.779 6014.00 IOPS, 751.75 MiB/s [2024-12-09T23:16:29.252Z] 5790.50 IOPS, 723.81 MiB/s 00:34:44.779 Latency(us) 00:34:44.779 [2024-12-09T23:16:29.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.779 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:44.779 nvme0n1 : 2.00 5789.60 723.70 0.00 0.00 2761.04 638.98 9961.47 00:34:44.779 [2024-12-09T23:16:29.252Z] =================================================================================================================== 00:34:44.779 [2024-12-09T23:16:29.252Z] Total : 5789.60 723.70 0.00 0.00 2761.04 638.98 9961.47 00:34:44.779 { 00:34:44.779 "results": [ 00:34:44.779 { 00:34:44.779 "job": "nvme0n1", 00:34:44.779 "core_mask": "0x2", 00:34:44.779 "workload": "randread", 00:34:44.779 "status": "finished", 00:34:44.779 "queue_depth": 16, 00:34:44.779 "io_size": 131072, 00:34:44.779 "runtime": 2.003074, 00:34:44.779 "iops": 5789.6013826748285, 00:34:44.779 "mibps": 723.7001728343536, 00:34:44.779 "io_failed": 0, 00:34:44.779 "io_timeout": 0, 00:34:44.779 "avg_latency_us": 2761.0396763300855, 00:34:44.779 "min_latency_us": 638.976, 00:34:44.779 "max_latency_us": 9961.472 00:34:44.780 } 00:34:44.780 ], 00:34:44.780 "core_count": 1 00:34:44.780 } 00:34:44.780 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:44.780 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:44.780 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:44.780 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:44.780 | select(.opcode=="crc32c") 00:34:44.780 | "\(.module_name) \(.executed)"' 00:34:44.780 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 580349 00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 580349 ']' 00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 580349 00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 580349 00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 580349' 00:34:45.039 killing process with pid 580349 00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 580349 00:34:45.039 Received shutdown signal, test time was about 2.000000 seconds 00:34:45.039 00:34:45.039 Latency(us) 00:34:45.039 [2024-12-09T23:16:29.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.039 [2024-12-09T23:16:29.512Z] =================================================================================================================== 00:34:45.039 [2024-12-09T23:16:29.512Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:45.039 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 580349 00:34:45.298 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:45.298 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:45.298 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:45.298 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:45.298 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:45.298 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:45.298 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:45.298 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=581143 00:34:45.298 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 581143 /var/tmp/bperf.sock 00:34:45.298 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:45.298 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 581143 ']' 00:34:45.298 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:45.298 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:45.298 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:45.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:45.299 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:45.299 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:45.299 [2024-12-10 00:16:29.693798] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:34:45.299 [2024-12-10 00:16:29.693856] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid581143 ] 00:34:45.557 [2024-12-10 00:16:29.782531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.557 [2024-12-10 00:16:29.822584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.557 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:45.557 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:45.557 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:45.557 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:45.557 00:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:45.816 00:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:45.816 00:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:46.075 nvme0n1 00:34:46.075 00:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:46.075 00:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:46.334 Running I/O for 2 seconds... 
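Aside: the teardown between passes uses the same helper each time: killprocess checks the PID is still alive, refuses to touch anything whose command name resolves to sudo, then kills and waits, which is why every pass above ends with a "killing process with pid ..." line followed by bdevperf's shutdown summary. A simplified rendering of that helper is below (the real one in common/autotest_common.sh also handles the FreeBSD ps syntax and a few more edge cases).

    #!/usr/bin/env bash
    # Simplified sketch of the killprocess helper traced after every bperf pass.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" = sudo ]; then                   # never kill the sudo wrapper
            echo "refusing to kill sudo process $pid" >&2
            return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }

    killprocess 581143    # e.g. the bdevperf instance from the randwrite 4096/128 pass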
00:34:48.209 27442.00 IOPS, 107.20 MiB/s [2024-12-09T23:16:32.682Z] 27677.00 IOPS, 108.11 MiB/s 00:34:48.209 Latency(us) 00:34:48.209 [2024-12-09T23:16:32.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:48.209 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:48.209 nvme0n1 : 2.01 27678.20 108.12 0.00 0.00 4616.80 3486.52 10433.33 00:34:48.209 [2024-12-09T23:16:32.682Z] =================================================================================================================== 00:34:48.209 [2024-12-09T23:16:32.682Z] Total : 27678.20 108.12 0.00 0.00 4616.80 3486.52 10433.33 00:34:48.209 { 00:34:48.209 "results": [ 00:34:48.209 { 00:34:48.209 "job": "nvme0n1", 00:34:48.209 "core_mask": "0x2", 00:34:48.209 "workload": "randwrite", 00:34:48.209 "status": "finished", 00:34:48.209 "queue_depth": 128, 00:34:48.209 "io_size": 4096, 00:34:48.209 "runtime": 2.005694, 00:34:48.209 "iops": 27678.200164132715, 00:34:48.209 "mibps": 108.11796939114342, 00:34:48.209 "io_failed": 0, 00:34:48.209 "io_timeout": 0, 00:34:48.209 "avg_latency_us": 4616.796225298123, 00:34:48.209 "min_latency_us": 3486.5152, 00:34:48.209 "max_latency_us": 10433.3312 00:34:48.209 } 00:34:48.209 ], 00:34:48.209 "core_count": 1 00:34:48.209 } 00:34:48.209 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:48.209 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:48.209 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:48.209 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:48.209 | select(.opcode=="crc32c") 00:34:48.209 | "\(.module_name) \(.executed)"' 00:34:48.209 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:48.468 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:48.468 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:48.468 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:48.468 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:48.468 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 581143 00:34:48.468 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 581143 ']' 00:34:48.468 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 581143 00:34:48.468 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:48.468 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:48.468 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 581143 00:34:48.468 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:48.468 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:34:48.468 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 581143' 00:34:48.468 killing process with pid 581143 00:34:48.468 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 581143 00:34:48.468 Received shutdown signal, test time was about 2.000000 seconds 00:34:48.468 00:34:48.468 Latency(us) 00:34:48.468 [2024-12-09T23:16:32.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:48.469 [2024-12-09T23:16:32.942Z] =================================================================================================================== 00:34:48.469 [2024-12-09T23:16:32.942Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:48.469 00:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 581143 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=581686 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 581686 /var/tmp/bperf.sock 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 581686 ']' 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:48.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:48.728 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:48.728 [2024-12-10 00:16:33.117888] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:34:48.728 [2024-12-10 00:16:33.117940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid581686 ] 00:34:48.728 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:48.728 Zero copy mechanism will not be used. 00:34:48.987 [2024-12-10 00:16:33.210789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:48.987 [2024-12-10 00:16:33.251203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:49.555 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:49.555 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:49.555 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:49.555 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:49.555 00:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:49.814 00:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:49.814 00:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:50.083 nvme0n1 00:34:50.083 00:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:50.083 00:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:50.348 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:50.348 Zero copy mechanism will not be used. 00:34:50.348 Running I/O for 2 seconds... 
00:34:52.228 6553.00 IOPS, 819.12 MiB/s [2024-12-09T23:16:36.701Z] 6314.50 IOPS, 789.31 MiB/s 00:34:52.228 Latency(us) 00:34:52.228 [2024-12-09T23:16:36.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:52.228 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:52.228 nvme0n1 : 2.00 6312.40 789.05 0.00 0.00 2530.35 1913.65 7497.32 00:34:52.228 [2024-12-09T23:16:36.701Z] =================================================================================================================== 00:34:52.228 [2024-12-09T23:16:36.701Z] Total : 6312.40 789.05 0.00 0.00 2530.35 1913.65 7497.32 00:34:52.228 { 00:34:52.228 "results": [ 00:34:52.228 { 00:34:52.228 "job": "nvme0n1", 00:34:52.228 "core_mask": "0x2", 00:34:52.228 "workload": "randwrite", 00:34:52.228 "status": "finished", 00:34:52.228 "queue_depth": 16, 00:34:52.228 "io_size": 131072, 00:34:52.228 "runtime": 2.003993, 00:34:52.228 "iops": 6312.39729879296, 00:34:52.228 "mibps": 789.04966234912, 00:34:52.228 "io_failed": 0, 00:34:52.228 "io_timeout": 0, 00:34:52.228 "avg_latency_us": 2530.354285280632, 00:34:52.228 "min_latency_us": 1913.6512, 00:34:52.228 "max_latency_us": 7497.3184 00:34:52.228 } 00:34:52.228 ], 00:34:52.228 "core_count": 1 00:34:52.228 } 00:34:52.228 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:52.228 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:52.228 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:52.228 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:52.228 | select(.opcode=="crc32c") 00:34:52.228 | "\(.module_name) \(.executed)"' 00:34:52.228 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 581686 00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 581686 ']' 00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 581686 00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 581686 00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
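As a sanity check, the reported throughput is consistent with the JSON above: 6312.40 IOPS at 131072-byte I/O is 6312.40 * 0.125 MiB, i.e. about 789.05 MiB/s. The test then confirms that the crc32c digests were computed by the software accel module (expected here, since the run uses scan_dsa=false). A rough sketch of that check over the bperf socket, reusing the same jq filter the script logs:

    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
      | { read -r acc_module acc_executed; (( acc_executed > 0 )) && [[ $acc_module == software ]]; }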
00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 581686' 00:34:52.488 killing process with pid 581686 00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 581686 00:34:52.488 Received shutdown signal, test time was about 2.000000 seconds 00:34:52.488 00:34:52.488 Latency(us) 00:34:52.488 [2024-12-09T23:16:36.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:52.488 [2024-12-09T23:16:36.961Z] =================================================================================================================== 00:34:52.488 [2024-12-09T23:16:36.961Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:52.488 00:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 581686 00:34:52.747 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 579522 00:34:52.747 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 579522 ']' 00:34:52.747 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 579522 00:34:52.747 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:52.747 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:52.747 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 579522 00:34:52.747 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:52.747 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:52.747 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 579522' 00:34:52.747 killing process with pid 579522 00:34:52.747 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 579522 00:34:52.748 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 579522 00:34:53.008 00:34:53.008 real 0m16.687s 00:34:53.008 user 0m31.462s 00:34:53.008 sys 0m5.298s 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:53.008 ************************************ 00:34:53.008 END TEST nvmf_digest_clean 00:34:53.008 ************************************ 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:53.008 ************************************ 00:34:53.008 START TEST nvmf_digest_error 00:34:53.008 ************************************ 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:34:53.008 
00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=582409 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 582409 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 582409 ']' 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:53.008 00:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:53.008 [2024-12-10 00:16:37.445181] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:34:53.008 [2024-12-10 00:16:37.445226] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:53.268 [2024-12-10 00:16:37.541210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.268 [2024-12-10 00:16:37.581104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:53.268 [2024-12-10 00:16:37.581140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:53.268 [2024-12-10 00:16:37.581149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:53.268 [2024-12-10 00:16:37.581158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:53.268 [2024-12-10 00:16:37.581165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
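The target is launched with -e 0xFFFF, so every tracepoint group is enabled, and the notices above spell out how to inspect it while it runs. A small sketch, assuming spdk_trace was built into the same tree:

    # snapshot the running nvmf app's trace (shm name 'nvmf', instance id 0), per the notice above
    $SPDK/build/bin/spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis after the run
    cp /dev/shm/nvmf_trace.0 /tmp/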
00:34:53.268 [2024-12-10 00:16:37.581788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.837 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:53.837 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:53.837 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:53.837 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:53.837 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:54.097 [2024-12-10 00:16:38.319944] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:54.097 null0 00:34:54.097 [2024-12-10 00:16:38.415312] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:54.097 [2024-12-10 00:16:38.439512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=582541 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 582541 /var/tmp/bperf.sock 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 582541 ']' 
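common_target_config drives the freshly started target over its default RPC socket; the notices above (null0 bdev, TCP transport init, listener on 10.0.0.2 port 4420) roughly correspond to an RPC sequence like the one below. This is a sketch rather than the script's exact commands, and the null bdev size and block size are assumed:

    rpc() { $SPDK/scripts/rpc.py "$@"; }                      # hypothetical wrapper for the target socket
    rpc bdev_null_create null0 1000 512                       # namespace backing store (size in MB, 512-byte blocks; assumed)
    rpc nvmf_create_transport -t tcp
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420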
00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:54.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:54.097 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:54.097 [2024-12-10 00:16:38.491085] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:34:54.097 [2024-12-10 00:16:38.491129] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid582541 ] 00:34:54.357 [2024-12-10 00:16:38.580346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.357 [2024-12-10 00:16:38.620132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.357 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:54.357 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:54.357 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:54.357 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:54.616 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:54.616 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.616 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:54.616 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.616 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:54.616 00:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:54.875 nvme0n1 00:34:54.875 00:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:54.875 00:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.875 00:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:54.875 
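With crc32c already assigned to the 'error' accel module on the target (accel_assign_opc above), the digest-error test is armed in two steps: injection is first left disabled while the host attaches, then switched to corrupting crc32c results for the run. On the host side, bdev_nvme_set_options turns on per-error counters and unlimited bdev retries, so corrupted data digests surface as the stream of retried transient transport errors that follows rather than failing the run. The same sequence, sketched with the paths abbreviated to $SPDK:

    # host (bperf socket): count NVMe errors, retry indefinitely, attach with data digest
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable        # target: pass-through while attaching
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256 # target: inject crc32c corruption (-i 256 as in the script)
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests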
00:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.875 00:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:54.875 00:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:55.136 Running I/O for 2 seconds... 00:34:55.136 [2024-12-10 00:16:39.381042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.381077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.381090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.392326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.392351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.392364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.401344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.401366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.401377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.413591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.413614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.413625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.424864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.424886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.424898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.433377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.433398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.433409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.445223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.445247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.445259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.456184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.456206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.456217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.464622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.464644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.464658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.476239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.476260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.476271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.484926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.484947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.484958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.495752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.495773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.495784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.503862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.503884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.503894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.512722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.512743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.512753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.522206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.522227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.522238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.532181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.532202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.532217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.541028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.541048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.541059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.550171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.550196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.550207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.558951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.558971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.558982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.568003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.568024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.136 [2024-12-10 00:16:39.568034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.136 [2024-12-10 00:16:39.576249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.136 [2024-12-10 00:16:39.576270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.137 [2024-12-10 00:16:39.576280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.137 [2024-12-10 00:16:39.587241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.137 [2024-12-10 00:16:39.587262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.137 [2024-12-10 00:16:39.587272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.137 [2024-12-10 00:16:39.595095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.137 [2024-12-10 00:16:39.595116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.137 [2024-12-10 00:16:39.595127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.137 [2024-12-10 00:16:39.606659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.137 [2024-12-10 00:16:39.606680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.137 [2024-12-10 00:16:39.606691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.397 [2024-12-10 00:16:39.615091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.397 [2024-12-10 00:16:39.615112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.397 [2024-12-10 00:16:39.615123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.397 [2024-12-10 00:16:39.625877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.397 [2024-12-10 00:16:39.625899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.397 [2024-12-10 00:16:39.625913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.397 [2024-12-10 00:16:39.637179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.397 [2024-12-10 00:16:39.637199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:55.397 [2024-12-10 00:16:39.637210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.397 [2024-12-10 00:16:39.646384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.397 [2024-12-10 00:16:39.646405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.397 [2024-12-10 00:16:39.646416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.397 [2024-12-10 00:16:39.657498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.397 [2024-12-10 00:16:39.657519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.397 [2024-12-10 00:16:39.657530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.397 [2024-12-10 00:16:39.669193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.397 [2024-12-10 00:16:39.669216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.669227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.677618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.677639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.677650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.687744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.687764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.687775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.695744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.695764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.695775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.707186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.707208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:17109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.707218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.719043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.719068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.719078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.730114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.730135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.730145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.738428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.738449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.738460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.750734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.750756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.750768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.762266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.762288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.762298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.770746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.770767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.770778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.780643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.780665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.780676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.791570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.791593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.791603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.803154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.803175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.803185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.814561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.814582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.814592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.823068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.823090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.823101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.835467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.835488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.835498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.844627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.844647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.844657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.852401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 
00:34:55.398 [2024-12-10 00:16:39.852423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.852433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.398 [2024-12-10 00:16:39.862035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.398 [2024-12-10 00:16:39.862056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.398 [2024-12-10 00:16:39.862067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:39.871882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:39.871904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:39.871914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:39.883590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:39.883612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:39.883622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:39.894291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:39.894312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:39.894326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:39.903275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:39.903296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:39.903306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:39.913409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:39.913430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:39.913440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:39.923259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:39.923280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:39.923291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:39.932715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:39.932737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:39.932747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:39.940816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:39.940843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:39.940854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:39.950433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:39.950455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:39.950465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:39.959218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:39.959238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:39.959249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:39.967808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:39.967835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:39.967845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:39.977827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:39.977852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:39.977862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:39.986007] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:39.986035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:39.986046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:39.996248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:39.996269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:39.996280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:40.004532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:40.004554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:40.004565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:40.015947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:40.015968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:40.015978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:40.029011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:40.029033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:40.029044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:40.039323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:40.039345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:40.039356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:40.047221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:40.047243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:40.047254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:40.059018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:40.059041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:40.059052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:40.070077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:40.070099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:40.070110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:40.078543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:40.078564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:40.078574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:40.088104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:40.088125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:40.088135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:40.096375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.659 [2024-12-10 00:16:40.096396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.659 [2024-12-10 00:16:40.096407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.659 [2024-12-10 00:16:40.105670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.660 [2024-12-10 00:16:40.105692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.660 [2024-12-10 00:16:40.105704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.660 [2024-12-10 00:16:40.115229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.660 [2024-12-10 00:16:40.115251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.660 [2024-12-10 00:16:40.115261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.660 [2024-12-10 00:16:40.124309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.660 [2024-12-10 00:16:40.124330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.660 [2024-12-10 00:16:40.124341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.134028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.134050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.134061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.144532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.144558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.144569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.155913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.155935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.155946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.164063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.164085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.164095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.176325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.176347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.176358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.186549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.186571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.186582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.196178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.196201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.196212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.205108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.205130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.205140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.214003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.214025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.214036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.223436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.223458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.223468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.232586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.232609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.232621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.243301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.243323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.243334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.255863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.255886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:55.921 [2024-12-10 00:16:40.255896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.263940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.263963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.263974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.274257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.274280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.274290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.284460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.284482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.284493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.295411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.295433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.295444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.303922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.303943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.303954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.315578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.315602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.315616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.327797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.327819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:2448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.327836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.335902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.335924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.335934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.347642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.347665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.347675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 [2024-12-10 00:16:40.357608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.921 [2024-12-10 00:16:40.357630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.921 [2024-12-10 00:16:40.357651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.921 25398.00 IOPS, 99.21 MiB/s [2024-12-09T23:16:40.394Z] [2024-12-10 00:16:40.367036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.922 [2024-12-10 00:16:40.367060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.922 [2024-12-10 00:16:40.367071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.922 [2024-12-10 00:16:40.377034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.922 [2024-12-10 00:16:40.377056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.922 [2024-12-10 00:16:40.377067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.922 [2024-12-10 00:16:40.386021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:55.922 [2024-12-10 00:16:40.386043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.922 [2024-12-10 00:16:40.386055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.395761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 
00:16:40.395783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.182 [2024-12-10 00:16:40.395794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.405492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 00:16:40.405517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.182 [2024-12-10 00:16:40.405528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.414024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 00:16:40.414046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.182 [2024-12-10 00:16:40.414056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.425046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 00:16:40.425069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.182 [2024-12-10 00:16:40.425079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.433242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 00:16:40.433264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.182 [2024-12-10 00:16:40.433275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.444425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 00:16:40.444446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.182 [2024-12-10 00:16:40.444456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.454550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 00:16:40.454573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.182 [2024-12-10 00:16:40.454583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.463758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 00:16:40.463781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.182 [2024-12-10 00:16:40.463792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.472658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 00:16:40.472680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.182 [2024-12-10 00:16:40.472690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.482031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 00:16:40.482053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.182 [2024-12-10 00:16:40.482067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.491647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 00:16:40.491669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.182 [2024-12-10 00:16:40.491680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.500882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 00:16:40.500904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.182 [2024-12-10 00:16:40.500915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.509539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 00:16:40.509562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.182 [2024-12-10 00:16:40.509573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.518504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 00:16:40.518526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.182 [2024-12-10 00:16:40.518537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.182 [2024-12-10 00:16:40.528066] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.182 [2024-12-10 00:16:40.528087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.183 [2024-12-10 00:16:40.528098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.183 [2024-12-10 00:16:40.536908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.183 [2024-12-10 00:16:40.536929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.183 [2024-12-10 00:16:40.536940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.183 [2024-12-10 00:16:40.545908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.183 [2024-12-10 00:16:40.545930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.183 [2024-12-10 00:16:40.545941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.183 [2024-12-10 00:16:40.555688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.183 [2024-12-10 00:16:40.555711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.183 [2024-12-10 00:16:40.555722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.183 [2024-12-10 00:16:40.567273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.183 [2024-12-10 00:16:40.567300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.183 [2024-12-10 00:16:40.567310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.183 [2024-12-10 00:16:40.576042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.183 [2024-12-10 00:16:40.576064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.183 [2024-12-10 00:16:40.576074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.183 [2024-12-10 00:16:40.586535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.183 [2024-12-10 00:16:40.586558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.183 [2024-12-10 00:16:40.586568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:34:56.183 [2024-12-10 00:16:40.595338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.183 [2024-12-10 00:16:40.595360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.183 [2024-12-10 00:16:40.595370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.183 [2024-12-10 00:16:40.604452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.183 [2024-12-10 00:16:40.604473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.183 [2024-12-10 00:16:40.604484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.183 [2024-12-10 00:16:40.612713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.183 [2024-12-10 00:16:40.612734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.183 [2024-12-10 00:16:40.612744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.183 [2024-12-10 00:16:40.622492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.183 [2024-12-10 00:16:40.622514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.183 [2024-12-10 00:16:40.622524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.183 [2024-12-10 00:16:40.631409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.183 [2024-12-10 00:16:40.631430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.183 [2024-12-10 00:16:40.631441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.183 [2024-12-10 00:16:40.640313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.183 [2024-12-10 00:16:40.640334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.183 [2024-12-10 00:16:40.640345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.183 [2024-12-10 00:16:40.649521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.183 [2024-12-10 00:16:40.649542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.183 [2024-12-10 00:16:40.649552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.443 [2024-12-10 00:16:40.658324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.443 [2024-12-10 00:16:40.658345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.443 [2024-12-10 00:16:40.658355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.443 [2024-12-10 00:16:40.667197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.443 [2024-12-10 00:16:40.667219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.443 [2024-12-10 00:16:40.667229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.443 [2024-12-10 00:16:40.676192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.443 [2024-12-10 00:16:40.676213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.443 [2024-12-10 00:16:40.676223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.443 [2024-12-10 00:16:40.684139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.443 [2024-12-10 00:16:40.684161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.443 [2024-12-10 00:16:40.684171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.443 [2024-12-10 00:16:40.694995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.443 [2024-12-10 00:16:40.695018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.443 [2024-12-10 00:16:40.695028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.443 [2024-12-10 00:16:40.705907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.443 [2024-12-10 00:16:40.705929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.443 [2024-12-10 00:16:40.705939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.443 [2024-12-10 00:16:40.714274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.443 [2024-12-10 00:16:40.714302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.443 [2024-12-10 00:16:40.714312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.443 [2024-12-10 00:16:40.726336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.443 [2024-12-10 00:16:40.726358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.443 [2024-12-10 00:16:40.726372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.443 [2024-12-10 00:16:40.737907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.443 [2024-12-10 00:16:40.737929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.443 [2024-12-10 00:16:40.737939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.443 [2024-12-10 00:16:40.746415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.443 [2024-12-10 00:16:40.746436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.443 [2024-12-10 00:16:40.746446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.443 [2024-12-10 00:16:40.757919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.757941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.757951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.768309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.768331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.768341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.778493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.778513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.778523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.788072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.788094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:56.444 [2024-12-10 00:16:40.788104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.798496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.798518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.798528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.808494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.808516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.808527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.816437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.816462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.816472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.827708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.827730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.827740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.838967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.838989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.838999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.848883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.848904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.848915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.858170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.858190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:9912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.858201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.867675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.867696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.867706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.876546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.876567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.876577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.884574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.884594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.884604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.893387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.893408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.893418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.905788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.905809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.905820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.444 [2024-12-10 00:16:40.913795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.444 [2024-12-10 00:16:40.913816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.444 [2024-12-10 00:16:40.913831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:40.925361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:40.925382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:40.925392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:40.936915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:40.936935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:40.936945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:40.944970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:40.944990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:40.945001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:40.954764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:40.954785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:40.954796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:40.966034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:40.966054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:40.966064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:40.977173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:40.977194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:40.977205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:40.985704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:40.985730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:40.985740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:40.995626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 
00:34:56.704 [2024-12-10 00:16:40.995646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:40.995657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:41.004229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:41.004251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:41.004262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:41.014183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:41.014204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:41.014214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:41.021508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:41.021529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:41.021539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:41.031388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:41.031409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:41.031420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:41.042305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:41.042333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:41.042343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:41.053613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:41.053635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:41.053645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:41.061802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:41.061828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:41.061839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:41.072470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:41.072491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:41.072501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:41.083051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:41.083070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:41.083081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:41.093185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:41.093206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:41.093217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.704 [2024-12-10 00:16:41.101212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.704 [2024-12-10 00:16:41.101233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.704 [2024-12-10 00:16:41.101243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.705 [2024-12-10 00:16:41.113154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.705 [2024-12-10 00:16:41.113179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.705 [2024-12-10 00:16:41.113189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.705 [2024-12-10 00:16:41.121206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.705 [2024-12-10 00:16:41.121226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.705 [2024-12-10 00:16:41.121236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.705 [2024-12-10 00:16:41.131363] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.705 [2024-12-10 00:16:41.131384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.705 [2024-12-10 00:16:41.131395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.705 [2024-12-10 00:16:41.143113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.705 [2024-12-10 00:16:41.143136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.705 [2024-12-10 00:16:41.143146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.705 [2024-12-10 00:16:41.151790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.705 [2024-12-10 00:16:41.151811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.705 [2024-12-10 00:16:41.151829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.705 [2024-12-10 00:16:41.159876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.705 [2024-12-10 00:16:41.159897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.705 [2024-12-10 00:16:41.159907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.705 [2024-12-10 00:16:41.171337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.705 [2024-12-10 00:16:41.171358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.705 [2024-12-10 00:16:41.171368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.965 [2024-12-10 00:16:41.179583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.965 [2024-12-10 00:16:41.179605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.965 [2024-12-10 00:16:41.179615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.965 [2024-12-10 00:16:41.191059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.191081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.191091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.201154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.201175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.201186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.209276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.209298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.209308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.221602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.221624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.221635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.232677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.232699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.232709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.241058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.241083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.241093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.252601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.252623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.252634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.261113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.261133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.261144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.272795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.272817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.272831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.284811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.284837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.284848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.295998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.296019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.296030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.304339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.304360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.304370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.316554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.316576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.316586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.328541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.328562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.328572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.339913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.339935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.339945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.348753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.348774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.348784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 [2024-12-10 00:16:41.359088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.359109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.359120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 25733.00 IOPS, 100.52 MiB/s [2024-12-09T23:16:41.439Z] [2024-12-10 00:16:41.369696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c59a40) 00:34:56.966 [2024-12-10 00:16:41.369718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.966 [2024-12-10 00:16:41.369728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.966 00:34:56.966 Latency(us) 00:34:56.966 [2024-12-09T23:16:41.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.966 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:56.966 nvme0n1 : 2.01 25729.06 100.50 0.00 0.00 4969.89 2254.44 16882.07 00:34:56.966 [2024-12-09T23:16:41.439Z] =================================================================================================================== 00:34:56.966 [2024-12-09T23:16:41.439Z] Total : 25729.06 100.50 0.00 0.00 4969.89 2254.44 16882.07 00:34:56.966 { 00:34:56.966 "results": [ 00:34:56.966 { 00:34:56.966 "job": "nvme0n1", 00:34:56.966 "core_mask": "0x2", 00:34:56.966 "workload": "randread", 00:34:56.966 "status": "finished", 00:34:56.966 "queue_depth": 128, 00:34:56.966 "io_size": 4096, 00:34:56.966 "runtime": 2.005281, 00:34:56.966 "iops": 25729.062410704533, 00:34:56.966 "mibps": 100.50415004181458, 00:34:56.966 "io_failed": 0, 00:34:56.966 "io_timeout": 0, 00:34:56.966 "avg_latency_us": 4969.886498306005, 00:34:56.966 "min_latency_us": 2254.4384, 00:34:56.966 "max_latency_us": 16882.0736 00:34:56.966 } 00:34:56.966 ], 00:34:56.966 "core_count": 1 00:34:56.966 } 00:34:56.966 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:56.966 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:56.966 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:56.966 | .driver_specific 00:34:56.966 | .nvme_error 00:34:56.966 | .status_code 00:34:56.966 | .command_transient_transport_error' 00:34:56.966 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:57.226 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 202 > 0 )) 00:34:57.226 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 582541 00:34:57.226 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 582541 ']' 00:34:57.226 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 582541 00:34:57.226 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:34:57.226 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:57.226 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 582541 00:34:57.226 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:57.226 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:57.226 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 582541' 00:34:57.226 killing process with pid 582541 00:34:57.226 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 582541 00:34:57.226 Received shutdown signal, test time was about 2.000000 seconds 00:34:57.226 00:34:57.226 Latency(us) 00:34:57.226 [2024-12-09T23:16:41.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.226 [2024-12-09T23:16:41.699Z] =================================================================================================================== 00:34:57.226 [2024-12-09T23:16:41.699Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:57.226 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 582541 00:34:57.486 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:34:57.486 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:57.486 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:57.486 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:57.486 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:57.486 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=583077 00:34:57.486 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 583077 /var/tmp/bperf.sock 00:34:57.486 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:57.486 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 583077 ']' 00:34:57.486 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:57.486 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:34:57.486 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:57.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:57.486 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:57.486 00:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:57.486 [2024-12-10 00:16:41.858245] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:34:57.486 [2024-12-10 00:16:41.858299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid583077 ] 00:34:57.486 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:57.486 Zero copy mechanism will not be used. 00:34:57.486 [2024-12-10 00:16:41.946677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.746 [2024-12-10 00:16:41.984340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:57.746 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:57.746 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:57.746 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:57.746 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:58.006 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:58.006 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.006 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:58.006 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.006 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.006 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.265 nvme0n1 00:34:58.265 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:58.265 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.265 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:58.265 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:34:58.265 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:58.265 00:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:58.526 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:58.526 Zero copy mechanism will not be used. 00:34:58.526 Running I/O for 2 seconds... 00:34:58.526 [2024-12-10 00:16:42.796354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.796392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.796406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.802990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.803019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.803032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.811117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.811144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.811155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.818653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.818683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.818695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.823162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.823186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.823198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.827920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.827944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.827955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.833121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.833144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.833155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.838188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.838211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.838223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.843211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.843239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.843250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.848277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.848300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.848311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.853208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.853232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.853243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.858196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.858219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.858229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.863201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.863226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.863237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.868174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.868198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.868208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.873124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.873147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.873159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.878074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.878098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.878108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.882983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.883006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.883017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.888047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.888071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.888081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.893005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.893028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.893039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.897986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.898019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 
[2024-12-10 00:16:42.898030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.902964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.902987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.903001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.526 [2024-12-10 00:16:42.907912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.526 [2024-12-10 00:16:42.907934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.526 [2024-12-10 00:16:42.907945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.912872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.912894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.912904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.917836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.917859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.917870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.922726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.922750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.922760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.927650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.927673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.927685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.932568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.932592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22048 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.932602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.937506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.937530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.937541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.942923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.942947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.942957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.947916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.947943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.947954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.952792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.952816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.952833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.957750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.957773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.957783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.962745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.962768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.962778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.967670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.967693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.967703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.972647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.972670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.972681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.977621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.977644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.977655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.982522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.982545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.982555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.987495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.987518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.987529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.992541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.992563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.992575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.527 [2024-12-10 00:16:42.997645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.527 [2024-12-10 00:16:42.997668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.527 [2024-12-10 00:16:42.997679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.002704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.002729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.002740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.007636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.007659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.007670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.012542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.012565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.012576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.017424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.017446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.017457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.022321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.022344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.022354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.027248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.027270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.027281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.032164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.032187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.032201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.037141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 
[2024-12-10 00:16:43.037163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.037174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.042282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.042305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.042317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.047281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.047304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.047315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.052263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.052287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.052297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.057292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.057315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.057326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.062174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.062197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.062208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.067143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.067170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.067181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.072089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.072112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.072123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.076988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.077014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.077025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.081882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.081905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.081917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.086870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.086893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.086904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.091899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.091921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.091933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.096863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.096886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.096897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.101752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.101774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.101785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.107000] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.107023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.107033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.111737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.111764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.111774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.116743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.116766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.116777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.121656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.121679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.121690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.126644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.126667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.126678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.131496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.131520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.131531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.136382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.136406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.136416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
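The long run of data digest errors above is the point of this test case: the script trace earlier in this log corrupts CRC32C results (rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32) after attaching the controller with --ddgst, so reads periodically complete with COMMAND TRANSIENT TRANSPORT ERROR. As a minimal sketch, using only the rpc.py path, socket, and jq filter already shown in this trace (this is not the script's own helper, just the underlying commands), the resulting error count can be read back roughly like this:
  # Ask the bdevperf app for per-bdev error statistics and pull out the count
  # of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'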
00:34:58.789 [2024-12-10 00:16:43.141319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.141343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.141353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.146450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.146475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.146486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.151519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.151542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.151554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.156747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.156770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.156780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.162038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.162064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.162077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.167273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.789 [2024-12-10 00:16:43.167296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.789 [2024-12-10 00:16:43.167307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.789 [2024-12-10 00:16:43.172409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.172432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.172443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.177617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.177640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.177651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.182748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.182771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.182782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.187908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.187931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.187942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.193064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.193087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.193098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.198257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.198280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.198290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.203459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.203482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.203492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.208630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.208654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.208664] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.213838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.213860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.213872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.219012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.219035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.219045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.224204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.224226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.224237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.229418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.229441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.229452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.234616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.234639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.234649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.239829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.239851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.239862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.245003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.245026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.245037] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.250238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.250262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.250275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.255453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.255476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.255487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.790 [2024-12-10 00:16:43.260719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:58.790 [2024-12-10 00:16:43.260742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.790 [2024-12-10 00:16:43.260753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.265971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.265996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.266006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.271336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.271359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.271370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.276641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.276664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.276674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.281876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.281898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:59.051 [2024-12-10 00:16:43.281909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.287109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.287132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.287143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.292353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.292377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.292389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.297572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.297600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.297611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.302836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.302861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.302872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.308086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.308109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.308120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.313304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.313327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.313338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.318629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.318652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.318663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.323926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.323949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.323960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.329173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.329196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.329207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.334438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.334461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.334472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.339724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.339748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.339758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.344963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.344987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.344999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.350182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.350205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.350216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.355327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.355350] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.355361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.360473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.360497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.360507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.365718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.365740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.365751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.370954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.370977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.370988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.376124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.376147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.376157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.381388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.381411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.381422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.386576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.386598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.386612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.051 [2024-12-10 00:16:43.391806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.051 [2024-12-10 00:16:43.391833] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.051 [2024-12-10 00:16:43.391845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.397058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.397081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.397091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.402266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.402288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.402299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.407480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.407503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.407513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.412686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.412709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.412719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.417899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.417928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.417939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.423167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.423190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.423200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.428401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.428424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.428435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.433711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.433737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.433748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.438938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.438961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.438971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.444164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.444186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.444197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.449385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.449409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.449419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.454626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.454649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.454660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.459910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.459933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.459943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.465184] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.465207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.465218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.470473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.470496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.470507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.475665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.475688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.475699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.480928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.480951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.480962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.486185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.486209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.486221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.491181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.491205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.491215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.496205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.496229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.496239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:34:59.052 [2024-12-10 00:16:43.501255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.501278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.501289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.506228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.506252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.506263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.511200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.511223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.511235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.516129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.516169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.516180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.052 [2024-12-10 00:16:43.521180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.052 [2024-12-10 00:16:43.521207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.052 [2024-12-10 00:16:43.521218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.314 [2024-12-10 00:16:43.526152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.314 [2024-12-10 00:16:43.526176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.314 [2024-12-10 00:16:43.526187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.314 [2024-12-10 00:16:43.531154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.314 [2024-12-10 00:16:43.531177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.314 [2024-12-10 00:16:43.531188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.314 [2024-12-10 00:16:43.536206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.314 [2024-12-10 00:16:43.536230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.314 [2024-12-10 00:16:43.536240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.314 [2024-12-10 00:16:43.541136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.314 [2024-12-10 00:16:43.541159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.314 [2024-12-10 00:16:43.541169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.314 [2024-12-10 00:16:43.546098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.314 [2024-12-10 00:16:43.546122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.314 [2024-12-10 00:16:43.546132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.314 [2024-12-10 00:16:43.551095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.314 [2024-12-10 00:16:43.551118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.314 [2024-12-10 00:16:43.551128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.314 [2024-12-10 00:16:43.556140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.314 [2024-12-10 00:16:43.556164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.314 [2024-12-10 00:16:43.556174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.314 [2024-12-10 00:16:43.561354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.314 [2024-12-10 00:16:43.561378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.561389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.566563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.566586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.566597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.572253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.572277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.572288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.578403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.578426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.578437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.583683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.583706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.583717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.588958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.588981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.588992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.594169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.594193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.594203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.599367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.599390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.599400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.604541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.604564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 
[2024-12-10 00:16:43.604574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.609727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.609750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.609763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.614962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.614986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.614996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.620150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.620173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.620183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.625372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.625395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.625405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.630572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.630596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.630605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.635785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.635809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.635821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.640992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.641016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.641027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.646178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.646201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.646213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.651403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.651427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.651437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.656666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.656696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.656706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.661922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.661944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.661955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.667162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.667186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.667196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.672313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.672337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.672348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.677529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.677553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.677563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.682734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.682757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.682767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.687924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.687947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.687958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.693150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.693173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.693184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.698281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.698304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.698315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.703469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.315 [2024-12-10 00:16:43.703492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.315 [2024-12-10 00:16:43.703503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.315 [2024-12-10 00:16:43.708673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 [2024-12-10 00:16:43.708697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.708708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.316 [2024-12-10 00:16:43.713900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 [2024-12-10 00:16:43.713923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.713933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.316 [2024-12-10 00:16:43.719096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 [2024-12-10 00:16:43.719119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.719130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.316 [2024-12-10 00:16:43.724308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 [2024-12-10 00:16:43.724331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.724341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.316 [2024-12-10 00:16:43.729540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 [2024-12-10 00:16:43.729564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.729574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.316 [2024-12-10 00:16:43.734791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 [2024-12-10 00:16:43.734814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.734831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.316 [2024-12-10 00:16:43.740066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 [2024-12-10 00:16:43.740089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.740099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.316 [2024-12-10 00:16:43.745369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 [2024-12-10 00:16:43.745393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.745409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.316 [2024-12-10 00:16:43.750678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 
[2024-12-10 00:16:43.750701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.750712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.316 [2024-12-10 00:16:43.755916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 [2024-12-10 00:16:43.755939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.755951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.316 [2024-12-10 00:16:43.761235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 [2024-12-10 00:16:43.761259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.761270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.316 [2024-12-10 00:16:43.766519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 [2024-12-10 00:16:43.766544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.766555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.316 [2024-12-10 00:16:43.771768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 [2024-12-10 00:16:43.771792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.771803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.316 [2024-12-10 00:16:43.777049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 [2024-12-10 00:16:43.777073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.777085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.316 [2024-12-10 00:16:43.782129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.316 [2024-12-10 00:16:43.782154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.316 [2024-12-10 00:16:43.782165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.576 [2024-12-10 00:16:43.787152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x11c8530) 00:34:59.576 [2024-12-10 00:16:43.787177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.576 [2024-12-10 00:16:43.787188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.576 5959.00 IOPS, 744.88 MiB/s [2024-12-09T23:16:44.049Z] [2024-12-10 00:16:43.793711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.576 [2024-12-10 00:16:43.793736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.576 [2024-12-10 00:16:43.793747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.576 [2024-12-10 00:16:43.799060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.576 [2024-12-10 00:16:43.799082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.576 [2024-12-10 00:16:43.799094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.576 [2024-12-10 00:16:43.804357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.576 [2024-12-10 00:16:43.804380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.576 [2024-12-10 00:16:43.804390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.576 [2024-12-10 00:16:43.809613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.576 [2024-12-10 00:16:43.809636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.576 [2024-12-10 00:16:43.809648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.576 [2024-12-10 00:16:43.814939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.576 [2024-12-10 00:16:43.814963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.576 [2024-12-10 00:16:43.814974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.576 [2024-12-10 00:16:43.820107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.576 [2024-12-10 00:16:43.820130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.576 [2024-12-10 00:16:43.820141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.576 
[2024-12-10 00:16:43.825218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.576 [2024-12-10 00:16:43.825241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.576 [2024-12-10 00:16:43.825253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.576 [2024-12-10 00:16:43.830343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.576 [2024-12-10 00:16:43.830367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.576 [2024-12-10 00:16:43.830377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.576 [2024-12-10 00:16:43.835416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.576 [2024-12-10 00:16:43.835439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.576 [2024-12-10 00:16:43.835454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.576 [2024-12-10 00:16:43.840427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.576 [2024-12-10 00:16:43.840451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.576 [2024-12-10 00:16:43.840461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.576 [2024-12-10 00:16:43.845388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.576 [2024-12-10 00:16:43.845411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.576 [2024-12-10 00:16:43.845423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.850407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.850431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.850442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.855417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.855441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.855451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.860400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.860424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.860435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.865368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.865392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.865402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.870400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.870424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.870435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.875393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.875416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.875428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.880336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.880363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.880373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.885590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.885614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.885625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.890874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.890897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.890908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.896046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.896070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.896080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.901273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.901297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.901307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.906481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.906504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.906515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.911750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.911773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.911783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.916745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.916769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.916779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.921939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.921962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.921973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.926750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.926773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.926785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.931692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.931716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.931728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.936478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.936502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.936512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.941219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.941243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.941253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.945963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.945987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.945998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.950677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.950701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.950711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.955480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.955504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.955515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.960370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.960394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 
[2024-12-10 00:16:43.960404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.965307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.965331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.965345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.970294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.970317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.970328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.975313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.975337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.975349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.980334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.577 [2024-12-10 00:16:43.980356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.577 [2024-12-10 00:16:43.980367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.577 [2024-12-10 00:16:43.985296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.578 [2024-12-10 00:16:43.985320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.578 [2024-12-10 00:16:43.985330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.578 [2024-12-10 00:16:43.990341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.578 [2024-12-10 00:16:43.990364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.578 [2024-12-10 00:16:43.990375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.578 [2024-12-10 00:16:43.995361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.578 [2024-12-10 00:16:43.995384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.578 [2024-12-10 00:16:43.995395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.578 [2024-12-10 00:16:44.000530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.578 [2024-12-10 00:16:44.000553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.578 [2024-12-10 00:16:44.000564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.578 [2024-12-10 00:16:44.005659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.578 [2024-12-10 00:16:44.005683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.578 [2024-12-10 00:16:44.005693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.578 [2024-12-10 00:16:44.010856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.578 [2024-12-10 00:16:44.010882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.578 [2024-12-10 00:16:44.010893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.578 [2024-12-10 00:16:44.016140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.578 [2024-12-10 00:16:44.016163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.578 [2024-12-10 00:16:44.016173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.578 [2024-12-10 00:16:44.021447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.578 [2024-12-10 00:16:44.021471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.578 [2024-12-10 00:16:44.021482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.578 [2024-12-10 00:16:44.026617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.578 [2024-12-10 00:16:44.026641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.578 [2024-12-10 00:16:44.026651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.578 [2024-12-10 00:16:44.031792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.578 [2024-12-10 00:16:44.031816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.578 [2024-12-10 00:16:44.031832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.578 [2024-12-10 00:16:44.036875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.578 [2024-12-10 00:16:44.036899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.578 [2024-12-10 00:16:44.036910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.578 [2024-12-10 00:16:44.042459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.578 [2024-12-10 00:16:44.042482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.578 [2024-12-10 00:16:44.042492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.578 [2024-12-10 00:16:44.047885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.578 [2024-12-10 00:16:44.047908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.578 [2024-12-10 00:16:44.047920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.050721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.050744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.050755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.055688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.055711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.055722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.060806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.060835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.060846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.066020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.066044] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.066055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.071177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.071200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.071210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.076661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.076684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.076695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.082549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.082572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.082583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.087563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.087587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.087598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.092804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.092832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.092844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.097997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.098020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.098034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.103106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 
00:34:59.839 [2024-12-10 00:16:44.103129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.103140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.108212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.108234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.108245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.113292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.113315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.113326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.118633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.118657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.118666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.123806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.123834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.839 [2024-12-10 00:16:44.123844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.839 [2024-12-10 00:16:44.128906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.839 [2024-12-10 00:16:44.128929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.128939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.134228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.134251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.134261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.139403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.139426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.139436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.144726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.144749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.144760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.149969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.149992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.150003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.155041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.155063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.155074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.160133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.160156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.160166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.165271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.165294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.165305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.170439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.170461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.170472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.175750] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.175773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.175783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.181097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.181120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.181131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.186537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.186560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.186574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.191926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.191948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.191958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.197203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.197225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.197236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.202345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.202368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.202379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.207552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.207575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.207585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:34:59.840 [2024-12-10 00:16:44.212629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.212651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.212662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.217750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.217773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.217783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.222856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.222879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.222889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.228036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.228059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.228070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.233158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.233185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.233196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.238365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.238387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.238398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.243657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.243681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.243691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.248789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.248812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.248829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.253974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.253997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.254008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.259054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.259076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.259087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.264141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.264164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.840 [2024-12-10 00:16:44.264175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.840 [2024-12-10 00:16:44.269961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.840 [2024-12-10 00:16:44.269985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.841 [2024-12-10 00:16:44.269995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.841 [2024-12-10 00:16:44.275437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.841 [2024-12-10 00:16:44.275460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.841 [2024-12-10 00:16:44.275470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.841 [2024-12-10 00:16:44.280691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.841 [2024-12-10 00:16:44.280714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.841 [2024-12-10 00:16:44.280725] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.841 [2024-12-10 00:16:44.285778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.841 [2024-12-10 00:16:44.285802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.841 [2024-12-10 00:16:44.285813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.841 [2024-12-10 00:16:44.290989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.841 [2024-12-10 00:16:44.291012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.841 [2024-12-10 00:16:44.291023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.841 [2024-12-10 00:16:44.296176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.841 [2024-12-10 00:16:44.296200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.841 [2024-12-10 00:16:44.296211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.841 [2024-12-10 00:16:44.301349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.841 [2024-12-10 00:16:44.301372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.841 [2024-12-10 00:16:44.301383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.841 [2024-12-10 00:16:44.306443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:34:59.841 [2024-12-10 00:16:44.306466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.841 [2024-12-10 00:16:44.306477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.102 [2024-12-10 00:16:44.311655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.102 [2024-12-10 00:16:44.311681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.102 [2024-12-10 00:16:44.311692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.102 [2024-12-10 00:16:44.316919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.102 [2024-12-10 00:16:44.316943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.102 [2024-12-10 00:16:44.316954] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.102 [2024-12-10 00:16:44.322144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.102 [2024-12-10 00:16:44.322168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.102 [2024-12-10 00:16:44.322185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.102 [2024-12-10 00:16:44.327293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.102 [2024-12-10 00:16:44.327317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.102 [2024-12-10 00:16:44.327327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.332440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.332464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.332475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.337697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.337720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.337731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.343070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.343093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.343103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.348208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.348232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.348242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.353436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.353458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:00.103 [2024-12-10 00:16:44.353469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.358832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.358855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.358865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.363943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.363967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.363977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.369556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.369583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.369593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.375670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.375693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.375704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.383255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.383279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.383290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.389972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.389996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.390007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.396187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.396211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.396221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.399755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.399777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.399787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.404018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.404042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.404053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.409258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.409281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.409292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.414084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.414107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.414118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.419203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.419226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.419237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.424299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.424322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.424332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.429416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.429440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.429450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.434392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.434415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.434426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.439502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.439525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.439535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.444851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.444874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.103 [2024-12-10 00:16:44.444883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.103 [2024-12-10 00:16:44.450107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.103 [2024-12-10 00:16:44.450130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.450140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.455335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.455358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.455369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.460409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.460436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.460447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.465511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 
00:35:00.104 [2024-12-10 00:16:44.465534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.465544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.470639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.470661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.470672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.475694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.475717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.475728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.480698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.480721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.480731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.485738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.485761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.485771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.490795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.490818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.490835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.495915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.495938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.495948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.501052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.501074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.501085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.506220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.506243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.506254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.511370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.511392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.511403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.516415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.516438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.516449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.521531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.521554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.521564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.526700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.526724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.526735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.531584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.531608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.531619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.536505] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.536528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.536539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.541486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.541509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.541520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.546423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.546446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.546460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.551384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.551406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.551417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.556740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.556764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.556774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.561774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.561798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.561809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.104 [2024-12-10 00:16:44.566942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.566966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.566976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:35:00.104 [2024-12-10 00:16:44.572553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.104 [2024-12-10 00:16:44.572577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.104 [2024-12-10 00:16:44.572588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.577802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.577831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.577842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.582913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.582937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.582948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.587992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.588015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.588026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.593176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.593203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.593214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.598401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.598424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.598434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.603437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.603459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.603470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.608636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.608659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.608669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.613748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.613772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.613783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.618853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.618877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.618887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.624191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.624214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.624224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.629489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.629512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.629522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.634640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.634663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.634674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.639719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.639742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.639752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.644803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.644832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.644842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.649968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.649991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.650002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.655076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.655099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.655110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.660239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.660262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.660273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.665281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.665305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.665316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.670410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.670432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.670443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.675716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.675740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:00.365 [2024-12-10 00:16:44.675750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.680867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.680890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.680904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.686122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.686146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.686156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.365 [2024-12-10 00:16:44.691169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.365 [2024-12-10 00:16:44.691192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.365 [2024-12-10 00:16:44.691202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.696227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.696249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.696260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.701329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.701351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.701362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.706407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.706430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.706440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.711638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.711660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21056 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.711670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.716869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.716892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.716902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.722248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.722270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.722280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.727592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.727615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.727625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.732731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.732755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.732765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.737976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.737999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.738009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.743059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.743081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.743091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.748442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.748465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.748475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.753571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.753594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.753604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.758419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.758442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.758453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.763265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.763287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.763298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.768379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.768403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.768417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.773609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.773631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.773641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.778750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.778773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.778783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.783855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 
[2024-12-10 00:16:44.783878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.783888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.366 [2024-12-10 00:16:44.789065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11c8530) 00:35:00.366 [2024-12-10 00:16:44.789088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.366 [2024-12-10 00:16:44.789098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.366 5987.00 IOPS, 748.38 MiB/s 00:35:00.366 Latency(us) 00:35:00.366 [2024-12-09T23:16:44.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.366 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:00.366 nvme0n1 : 2.00 5988.25 748.53 0.00 0.00 2669.27 645.53 13631.49 00:35:00.366 [2024-12-09T23:16:44.839Z] =================================================================================================================== 00:35:00.366 [2024-12-09T23:16:44.839Z] Total : 5988.25 748.53 0.00 0.00 2669.27 645.53 13631.49 00:35:00.366 { 00:35:00.366 "results": [ 00:35:00.366 { 00:35:00.366 "job": "nvme0n1", 00:35:00.366 "core_mask": "0x2", 00:35:00.366 "workload": "randread", 00:35:00.366 "status": "finished", 00:35:00.366 "queue_depth": 16, 00:35:00.366 "io_size": 131072, 00:35:00.366 "runtime": 2.002254, 00:35:00.366 "iops": 5988.25124085156, 00:35:00.366 "mibps": 748.531405106445, 00:35:00.366 "io_failed": 0, 00:35:00.366 "io_timeout": 0, 00:35:00.366 "avg_latency_us": 2669.273217814846, 00:35:00.366 "min_latency_us": 645.5296, 00:35:00.366 "max_latency_us": 13631.488 00:35:00.366 } 00:35:00.366 ], 00:35:00.366 "core_count": 1 00:35:00.366 } 00:35:00.366 00:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:00.366 00:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:00.366 00:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:00.366 | .driver_specific 00:35:00.366 | .nvme_error 00:35:00.366 | .status_code 00:35:00.366 | .command_transient_transport_error' 00:35:00.366 00:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:00.626 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 387 > 0 )) 00:35:00.626 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 583077 00:35:00.626 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 583077 ']' 00:35:00.626 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 583077 00:35:00.626 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:00.626 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:35:00.626 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 583077 00:35:00.627 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:00.627 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:00.627 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 583077' 00:35:00.627 killing process with pid 583077 00:35:00.627 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 583077 00:35:00.627 Received shutdown signal, test time was about 2.000000 seconds 00:35:00.627 00:35:00.627 Latency(us) 00:35:00.627 [2024-12-09T23:16:45.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.627 [2024-12-09T23:16:45.100Z] =================================================================================================================== 00:35:00.627 [2024-12-09T23:16:45.100Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:00.627 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 583077 00:35:00.886 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:00.886 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:00.886 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:00.886 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:00.886 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:00.886 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=583684 00:35:00.886 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 583684 /var/tmp/bperf.sock 00:35:00.886 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:00.886 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 583684 ']' 00:35:00.886 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:00.886 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:00.886 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:00.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:00.886 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:00.886 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:00.886 [2024-12-10 00:16:45.298227] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:35:00.886 [2024-12-10 00:16:45.298279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid583684 ] 00:35:01.146 [2024-12-10 00:16:45.386965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.146 [2024-12-10 00:16:45.426453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.146 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:01.146 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:01.146 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:01.146 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:01.405 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:01.405 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.405 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:01.405 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.405 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:01.405 00:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:01.664 nvme0n1 00:35:01.664 00:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:01.664 00:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.664 00:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:01.664 00:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.664 00:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:01.664 00:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:01.925 Running I/O for 2 seconds... 
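Note: the randwrite error pass whose per-I/O output begins on the next lines reduces to the RPC sequence already visible in the xtrace above. Below is a minimal sketch of that sequence, assuming the same workspace layout (SPDK is only a shorthand variable for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout used in this run), an NVMe-oF/TCP target already listening on 10.0.0.2:4420, and the rpc_cmd helper from SPDK's test harness; the corrupt count of 256 is the value this run injects, and the error counter read back at the end naturally varies from run to run.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # bdevperf on its own RPC socket: core mask 0x2, randwrite, 4 KiB I/O, QD 128, 2 s; -z waits for perform_tests
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  # keep per-status NVMe error counters and retry indefinitely so digest failures surface as transient transport errors
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # error injection stays disabled while the controller is attached with TCP data digest (--ddgst) enabled
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt 256 crc32c operations, then kick off the timed workload
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # afterwards the test reads the transient transport error counter and expects it to be greater than zero
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The commands above are copied from the trace itself; only the SPDK shorthand and the inline comments are added here for readability.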
00:35:01.925 [2024-12-10 00:16:46.180443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.180612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-12-10 00:16:46.180641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.925 [2024-12-10 00:16:46.189546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.189704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-12-10 00:16:46.189728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.925 [2024-12-10 00:16:46.198656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.198814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-12-10 00:16:46.198839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.925 [2024-12-10 00:16:46.207863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.208021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-12-10 00:16:46.208045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.925 [2024-12-10 00:16:46.217002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.217159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-12-10 00:16:46.217178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.925 [2024-12-10 00:16:46.226289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.226445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-12-10 00:16:46.226465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.925 [2024-12-10 00:16:46.235400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.235554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-12-10 00:16:46.235575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 
cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.925 [2024-12-10 00:16:46.244551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.244704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-12-10 00:16:46.244724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.925 [2024-12-10 00:16:46.253738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.253898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-12-10 00:16:46.253918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.925 [2024-12-10 00:16:46.262857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.263011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-12-10 00:16:46.263030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.925 [2024-12-10 00:16:46.271958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.272111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-12-10 00:16:46.272131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.925 [2024-12-10 00:16:46.281141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.281296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-12-10 00:16:46.281316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.925 [2024-12-10 00:16:46.290237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.290396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-12-10 00:16:46.290415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.925 [2024-12-10 00:16:46.299387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.299542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.925 [2024-12-10 00:16:46.299561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.925 [2024-12-10 00:16:46.308511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.925 [2024-12-10 00:16:46.308665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-12-10 00:16:46.308684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.926 [2024-12-10 00:16:46.317625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.926 [2024-12-10 00:16:46.317777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-12-10 00:16:46.317796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.926 [2024-12-10 00:16:46.326840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.926 [2024-12-10 00:16:46.326995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-12-10 00:16:46.327014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.926 [2024-12-10 00:16:46.335944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.926 [2024-12-10 00:16:46.336100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-12-10 00:16:46.336120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.926 [2024-12-10 00:16:46.345073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.926 [2024-12-10 00:16:46.345227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-12-10 00:16:46.345246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.926 [2024-12-10 00:16:46.354200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.926 [2024-12-10 00:16:46.354352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-12-10 00:16:46.354372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.926 [2024-12-10 00:16:46.363295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.926 [2024-12-10 00:16:46.363450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-12-10 00:16:46.363468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.926 [2024-12-10 00:16:46.372438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.926 [2024-12-10 00:16:46.372588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-12-10 00:16:46.372608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.926 [2024-12-10 00:16:46.381534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.926 [2024-12-10 00:16:46.381687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-12-10 00:16:46.381706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:01.926 [2024-12-10 00:16:46.390626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:01.926 [2024-12-10 00:16:46.390781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:01.926 [2024-12-10 00:16:46.390800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.187 [2024-12-10 00:16:46.399870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.187 [2024-12-10 00:16:46.400037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.187 [2024-12-10 00:16:46.400055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.187 [2024-12-10 00:16:46.408979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.187 [2024-12-10 00:16:46.409134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.187 [2024-12-10 00:16:46.409153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.187 [2024-12-10 00:16:46.418101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.187 [2024-12-10 00:16:46.418254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.187 [2024-12-10 00:16:46.418273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.187 [2024-12-10 00:16:46.427223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.187 [2024-12-10 00:16:46.427380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.187 [2024-12-10 00:16:46.427398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.436338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.436491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.436510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.445664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.445818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.445846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.454932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.455087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.455106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.464097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.464250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.464269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.473233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.473386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.473405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.482329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.482481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.482499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.491448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.491603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 
00:16:46.491622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.500580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.500733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.500752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.509669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.509821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.509845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.518813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.518973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.518992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.527911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.528069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.528087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.537033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.537186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.537204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.546186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.546341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.546362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.555298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.555450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:02.188 [2024-12-10 00:16:46.555468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.564400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.564554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.564572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.573515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.573666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.573685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.582661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.582813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.582836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.591792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.591950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.591970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.600869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.601021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.601040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.609946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.610101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.610119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.619091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.619238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15156 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.619256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.628145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.628299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.628318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.637204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.637354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.637373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.646349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.646505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.646524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.188 [2024-12-10 00:16:46.655465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.188 [2024-12-10 00:16:46.655618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.188 [2024-12-10 00:16:46.655637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.664660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.449 [2024-12-10 00:16:46.664817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.449 [2024-12-10 00:16:46.664842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.673764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.449 [2024-12-10 00:16:46.673924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.449 [2024-12-10 00:16:46.673943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.682883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.449 [2024-12-10 00:16:46.683037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:16682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.449 [2024-12-10 00:16:46.683061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.692031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.449 [2024-12-10 00:16:46.692183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.449 [2024-12-10 00:16:46.692202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.701329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.449 [2024-12-10 00:16:46.701482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.449 [2024-12-10 00:16:46.701501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.710448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.449 [2024-12-10 00:16:46.710601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.449 [2024-12-10 00:16:46.710620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.719565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.449 [2024-12-10 00:16:46.719720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.449 [2024-12-10 00:16:46.719739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.728673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.449 [2024-12-10 00:16:46.728830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.449 [2024-12-10 00:16:46.728849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.738023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.449 [2024-12-10 00:16:46.738180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.449 [2024-12-10 00:16:46.738200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.747400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.449 [2024-12-10 00:16:46.747556] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.449 [2024-12-10 00:16:46.747585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.756579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.449 [2024-12-10 00:16:46.756733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.449 [2024-12-10 00:16:46.756752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.765716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.449 [2024-12-10 00:16:46.765873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.449 [2024-12-10 00:16:46.765895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.774844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.449 [2024-12-10 00:16:46.775004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.449 [2024-12-10 00:16:46.775023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.783991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.449 [2024-12-10 00:16:46.784143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.449 [2024-12-10 00:16:46.784162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.449 [2024-12-10 00:16:46.793135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 00:16:46.793288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.793306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.450 [2024-12-10 00:16:46.802244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 00:16:46.802396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.802415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.450 [2024-12-10 00:16:46.811382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 00:16:46.811535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.811554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.450 [2024-12-10 00:16:46.820496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 00:16:46.820648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.820668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.450 [2024-12-10 00:16:46.829596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 00:16:46.829749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.829769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.450 [2024-12-10 00:16:46.838749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 00:16:46.838908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.838927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.450 [2024-12-10 00:16:46.847863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 00:16:46.848018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.848037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.450 [2024-12-10 00:16:46.856965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 00:16:46.857119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.857140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.450 [2024-12-10 00:16:46.865997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 00:16:46.866151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.866170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.450 [2024-12-10 00:16:46.875092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 
00:16:46.875246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.875264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.450 [2024-12-10 00:16:46.884247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 00:16:46.884402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.884420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.450 [2024-12-10 00:16:46.893355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 00:16:46.893507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.893526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.450 [2024-12-10 00:16:46.902463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 00:16:46.902616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.902635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.450 [2024-12-10 00:16:46.911614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 00:16:46.911765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.911784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.450 [2024-12-10 00:16:46.920761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.450 [2024-12-10 00:16:46.920920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.450 [2024-12-10 00:16:46.920942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:46.929969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:46.930123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:46.930142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:46.939100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with 
pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:46.939252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:46.939271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:46.948211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:46.948362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:46.948381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:46.957523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:46.957675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:46.957694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:46.966644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:46.966795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:46.966813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:46.975747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:46.975909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:46.975928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:46.984902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:46.985053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:46.985072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:46.994006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:46.994161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:46.994180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.003099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.003257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.003276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.012269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.012421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.012440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.021341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.021495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.021514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.030490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.030643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.030662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.039584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.039738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.039756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.048702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.048863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.048882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.057837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.057990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.058010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.066964] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.067117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.067136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.076034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.076191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.076210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.085200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.085351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.085370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.094288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.094438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.094457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.103404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.103556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.103575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.112523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.112676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.112694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.121630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.121782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.121800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 
[2024-12-10 00:16:47.130771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.130931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.130950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.139876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.140028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.140047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.149196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.149350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.149369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.158346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.158499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.158522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 [2024-12-10 00:16:47.167457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.168111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.168132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.711 27746.00 IOPS, 108.38 MiB/s [2024-12-09T23:16:47.184Z] [2024-12-10 00:16:47.176629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.711 [2024-12-10 00:16:47.176782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.711 [2024-12-10 00:16:47.176802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.185814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.185973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.185992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.194920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.195075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.195096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.204065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.204218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.204237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.213360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.213512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.213530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.222475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.222630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.222649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.231616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.231768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.231786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.240721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.240892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.240911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.249858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.250011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.250030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.258998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.259151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.259170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.268082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.268236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.268255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.277240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.277392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.277411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.286310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.286464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.286483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.295429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.295582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.295600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.304590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.304743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.304762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.313693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.313844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.313863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.322781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.322943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.972 [2024-12-10 00:16:47.322962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.972 [2024-12-10 00:16:47.331944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.972 [2024-12-10 00:16:47.332094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.973 [2024-12-10 00:16:47.332113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.973 [2024-12-10 00:16:47.341040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.973 [2024-12-10 00:16:47.341193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.973 [2024-12-10 00:16:47.341211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.973 [2024-12-10 00:16:47.350366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.973 [2024-12-10 00:16:47.350519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.973 [2024-12-10 00:16:47.350538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.973 [2024-12-10 00:16:47.359473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.973 [2024-12-10 00:16:47.359625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.973 [2024-12-10 00:16:47.359644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.973 [2024-12-10 00:16:47.368590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.973 [2024-12-10 00:16:47.368743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.973 [2024-12-10 00:16:47.368762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.973 [2024-12-10 00:16:47.377745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.973 [2024-12-10 00:16:47.377903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.973 [2024-12-10 00:16:47.377922] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.973 [2024-12-10 00:16:47.386849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.973 [2024-12-10 00:16:47.387001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.973 [2024-12-10 00:16:47.387020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.973 [2024-12-10 00:16:47.395950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.973 [2024-12-10 00:16:47.396103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.973 [2024-12-10 00:16:47.396122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.973 [2024-12-10 00:16:47.405059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.973 [2024-12-10 00:16:47.405212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.973 [2024-12-10 00:16:47.405231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.973 [2024-12-10 00:16:47.414146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.973 [2024-12-10 00:16:47.414299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.973 [2024-12-10 00:16:47.414317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.973 [2024-12-10 00:16:47.423269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.973 [2024-12-10 00:16:47.423417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.973 [2024-12-10 00:16:47.423435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.973 [2024-12-10 00:16:47.432342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.973 [2024-12-10 00:16:47.432497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.973 [2024-12-10 00:16:47.432515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:02.973 [2024-12-10 00:16:47.441423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:02.973 [2024-12-10 00:16:47.441576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:02.973 [2024-12-10 
00:16:47.441595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.233 [2024-12-10 00:16:47.450639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.233 [2024-12-10 00:16:47.450794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.233 [2024-12-10 00:16:47.450813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.233 [2024-12-10 00:16:47.459938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.233 [2024-12-10 00:16:47.460095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.233 [2024-12-10 00:16:47.460115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.233 [2024-12-10 00:16:47.469124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.233 [2024-12-10 00:16:47.469275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.233 [2024-12-10 00:16:47.469294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.233 [2024-12-10 00:16:47.478296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.233 [2024-12-10 00:16:47.478449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.233 [2024-12-10 00:16:47.478471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.233 [2024-12-10 00:16:47.487381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.233 [2024-12-10 00:16:47.487533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.233 [2024-12-10 00:16:47.487551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.233 [2024-12-10 00:16:47.496496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.233 [2024-12-10 00:16:47.496647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.233 [2024-12-10 00:16:47.496666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.233 [2024-12-10 00:16:47.505598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.505752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 
[2024-12-10 00:16:47.505771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.514687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.514841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.514860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.523845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.523998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.524017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.532928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.533081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.533100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.542067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.542219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.542238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.551189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.551341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.551360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.560286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.560442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.560461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.569401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.569551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:03.234 [2024-12-10 00:16:47.569570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.578502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.578656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.578675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.587590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.587738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.587757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.596714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.596873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.596892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.605848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.606002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.606020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.614926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.615078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.615097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.624052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.624205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.624224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.633136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.633287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15366 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.633306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.642247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.642397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.642417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.651363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.651516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.651543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.660435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.660590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.660609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.669578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.669729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.669747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.678638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.678791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.678810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.687744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.687902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.687922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.696860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.697012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20090 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.697031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.234 [2024-12-10 00:16:47.705978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.234 [2024-12-10 00:16:47.706130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.234 [2024-12-10 00:16:47.706149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.715146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.715301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.715324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.724433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.724586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.724604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.733511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.733661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.733679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.742638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.742790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.742809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.751743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.751919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.751938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.760863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.761017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1596 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.761036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.770005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.770158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.770176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.779136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.779287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.779308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.788232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.788385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.788404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.797342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.797497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.797515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.806447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.806600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.806619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.815567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.815717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.815736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.824781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.824956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 
nsid:1 lba:4595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.824976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.833905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.834056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.834074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.843036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.843189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.843208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.852129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.852282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.852301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.495 [2024-12-10 00:16:47.861278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.495 [2024-12-10 00:16:47.861432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.495 [2024-12-10 00:16:47.861451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.496 [2024-12-10 00:16:47.870408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.496 [2024-12-10 00:16:47.870558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.496 [2024-12-10 00:16:47.870577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.496 [2024-12-10 00:16:47.879554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.496 [2024-12-10 00:16:47.879708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.496 [2024-12-10 00:16:47.879726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.496 [2024-12-10 00:16:47.888866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.496 [2024-12-10 00:16:47.889020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:91 nsid:1 lba:9264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.496 [2024-12-10 00:16:47.889039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.496 [2024-12-10 00:16:47.897969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.496 [2024-12-10 00:16:47.898128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.496 [2024-12-10 00:16:47.898148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.496 [2024-12-10 00:16:47.907056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.496 [2024-12-10 00:16:47.907209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.496 [2024-12-10 00:16:47.907228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.496 [2024-12-10 00:16:47.916189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.496 [2024-12-10 00:16:47.916341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.496 [2024-12-10 00:16:47.916361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.496 [2024-12-10 00:16:47.925284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.496 [2024-12-10 00:16:47.925438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.496 [2024-12-10 00:16:47.925457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.496 [2024-12-10 00:16:47.934372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.496 [2024-12-10 00:16:47.934523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.496 [2024-12-10 00:16:47.934542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.496 [2024-12-10 00:16:47.943487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.496 [2024-12-10 00:16:47.943640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.496 [2024-12-10 00:16:47.943658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.496 [2024-12-10 00:16:47.952590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.496 [2024-12-10 00:16:47.952745] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.496 [2024-12-10 00:16:47.952767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.496 [2024-12-10 00:16:47.961706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.496 [2024-12-10 00:16:47.961864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.496 [2024-12-10 00:16:47.961884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:47.970844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:47.970996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:47.971015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:47.980139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:47.980292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:47.980311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:47.989271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:47.989424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:47.989442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:47.998420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:47.998572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:47.998591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.007489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.007635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.007653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.016598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.016749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.016768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.025673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.025831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.025850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.034800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.034959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.034981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.043898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.044050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.044069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.053007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.053160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.053179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.062132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.062283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.062303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.071226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.071378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.071396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.080357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.080512] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.080530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.089458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.089613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.089631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.098586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.098738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.098757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.107688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.107839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.107858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.116797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.116956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.116975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.125856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.126009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.126028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.135008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.757 [2024-12-10 00:16:48.135159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.757 [2024-12-10 00:16:48.135178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.757 [2024-12-10 00:16:48.144295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.758 [2024-12-10 
00:16:48.144449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.758 [2024-12-10 00:16:48.144479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.758 [2024-12-10 00:16:48.153402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.758 [2024-12-10 00:16:48.153555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.758 [2024-12-10 00:16:48.153574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.758 [2024-12-10 00:16:48.162514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.758 [2024-12-10 00:16:48.162668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.758 [2024-12-10 00:16:48.162687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.758 27874.00 IOPS, 108.88 MiB/s [2024-12-09T23:16:48.231Z] [2024-12-10 00:16:48.171612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x530e70) with pdu=0x200016efe2e8 00:35:03.758 [2024-12-10 00:16:48.171764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.758 [2024-12-10 00:16:48.171782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:03.758 00:35:03.758 Latency(us) 00:35:03.758 [2024-12-09T23:16:48.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.758 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:03.758 nvme0n1 : 2.01 27877.20 108.90 0.00 0.00 4583.77 3355.44 12635.34 00:35:03.758 [2024-12-09T23:16:48.231Z] =================================================================================================================== 00:35:03.758 [2024-12-09T23:16:48.231Z] Total : 27877.20 108.90 0.00 0.00 4583.77 3355.44 12635.34 00:35:03.758 { 00:35:03.758 "results": [ 00:35:03.758 { 00:35:03.758 "job": "nvme0n1", 00:35:03.758 "core_mask": "0x2", 00:35:03.758 "workload": "randwrite", 00:35:03.758 "status": "finished", 00:35:03.758 "queue_depth": 128, 00:35:03.758 "io_size": 4096, 00:35:03.758 "runtime": 2.005797, 00:35:03.758 "iops": 27877.197941765793, 00:35:03.758 "mibps": 108.89530446002263, 00:35:03.758 "io_failed": 0, 00:35:03.758 "io_timeout": 0, 00:35:03.758 "avg_latency_us": 4583.770631947922, 00:35:03.758 "min_latency_us": 3355.4432, 00:35:03.758 "max_latency_us": 12635.3408 00:35:03.758 } 00:35:03.758 ], 00:35:03.758 "core_count": 1 00:35:03.758 } 00:35:03.758 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:03.758 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:03.758 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:03.758 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:03.758 | .driver_specific 00:35:03.758 | .nvme_error 00:35:03.758 | .status_code 00:35:03.758 | .command_transient_transport_error' 00:35:04.018 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 )) 00:35:04.018 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 583684 00:35:04.018 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 583684 ']' 00:35:04.018 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 583684 00:35:04.018 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:04.018 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:04.018 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 583684 00:35:04.018 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:04.018 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:04.018 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 583684' 00:35:04.018 killing process with pid 583684 00:35:04.018 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 583684 00:35:04.018 Received shutdown signal, test time was about 2.000000 seconds 00:35:04.018 00:35:04.018 Latency(us) 00:35:04.018 [2024-12-09T23:16:48.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.018 [2024-12-09T23:16:48.491Z] =================================================================================================================== 00:35:04.018 [2024-12-09T23:16:48.491Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:04.018 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 583684 00:35:04.278 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:04.278 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:04.278 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:04.278 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:04.278 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:04.278 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=584285 00:35:04.278 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 584285 /var/tmp/bperf.sock 00:35:04.278 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:04.278 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- 
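The (( 219 > 0 )) check above is host/digest.sh verifying that the injected CRC failures really surfaced as transient transport errors on the initiator. A minimal stand-alone sketch of that check, using only the RPC call and jq path visible in the trace and assuming the bdevperf socket at /var/tmp/bperf.sock is still listening (the count variable name is illustrative):

    count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
                bdev_get_iostat -b nvme0n1 \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # --nvme-error-stat keeps a per-status-code counter, so every COMMAND TRANSIENT
    # TRANSPORT ERROR completion printed above increments this field (219 in this run).
    (( count > 0 )) && echo "transient transport errors recorded: $count"
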
# '[' -z 584285 ']' 00:35:04.278 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:04.278 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:04.278 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:04.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:04.278 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:04.278 00:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:04.278 [2024-12-10 00:16:48.662000] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:35:04.278 [2024-12-10 00:16:48.662051] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid584285 ] 00:35:04.278 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:04.278 Zero copy mechanism will not be used. 00:35:04.538 [2024-12-10 00:16:48.751889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.538 [2024-12-10 00:16:48.792229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.106 00:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:05.106 00:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:05.106 00:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:05.106 00:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:05.366 00:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:05.366 00:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.366 00:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:05.366 00:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.366 00:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.366 00:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.626 nvme0n1 00:35:05.626 00:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:05.626 00:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 
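Everything from run_bperf_err randwrite 131072 16 down to this point is the setup for the second error pass: a fresh bdevperf in wait-for-RPC mode, NVMe error statistics enabled, and crc32c injection reset to disabled. Condensed into an illustrative sketch (the SPDK variable and the backgrounding are shorthand, and sending accel_error_inject_error to the default RPC socket mirrors what rpc_cmd appears to do in this trace):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # launch bdevperf idle (-z) on its own RPC socket; 128 KiB random writes, queue depth 16, 2 s runtime
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    # (the real script waits for the socket via waitforlisten before issuing any RPCs)
    # collect per-status-code NVMe error counters and retry failed commands indefinitely
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # start with crc32c error injection disabled; it is switched to "corrupt" just before the workload runs
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
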
-- # xtrace_disable 00:35:05.626 00:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:05.886 00:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.886 00:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:05.886 00:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:05.886 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:05.886 Zero copy mechanism will not be used. 00:35:05.886 Running I/O for 2 seconds... 00:35:05.886 [2024-12-10 00:16:50.203405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.203495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.203528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.210479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.210547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.210571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.215328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.215424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.215445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.220929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.220998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.221020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.226352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.226553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.226576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.232452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.232605] nvme_qpair.c: 
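The surrounding records are this second pass in flight. Just before they start, the trace shows the three steps that trigger them; roughly, and only as an illustration of the commands already visible above (the socket used for the injection RPC is again an assumption):

    # attach the target with the TCP data digest (--ddgst) enabled on the new controller
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt crc32c results at the -i 32 interval, which is what produces the
    # "Data digest error" / COMMAND TRANSIENT TRANSPORT ERROR pairs in these records
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    # kick off the queued randwrite workload (131072-byte I/O, qd 16) for its 2-second run
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
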
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.232626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.239122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.239285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.239307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.245416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.245574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.245594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.251889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.252010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.252031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.258295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.258452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.258476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.265173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.265330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.265367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.271607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.271765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.271785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.277294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 
00:16:50.277352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.277372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.281795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.281868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.281888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.286202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.286307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.286327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.290831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.290990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.291011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.295540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.295612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.295632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.300690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.300749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.300769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.305869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.305927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.305947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.311585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 
00:35:05.886 [2024-12-10 00:16:50.311681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.311701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.316767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.316867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.316887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.321669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.321726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.321746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.326416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.326512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.326531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.331046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.331106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.331126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.335933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.335989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.336008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.340602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.340672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.340692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.345494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with 
pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.345554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.345574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.350326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.350423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.350443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:05.886 [2024-12-10 00:16:50.354998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:05.886 [2024-12-10 00:16:50.355055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.886 [2024-12-10 00:16:50.355075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.359507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.359655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.359677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.364186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.364264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.364285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.369287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.369344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.369364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.374437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.374494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.374513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.379666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.379726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.379746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.385178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.385240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.385260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.390109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.390170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.390198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.395048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.395184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.395204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.400737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.400793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.400813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.406284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.406386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.406406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.411489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.411588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.411608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.417087] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.417147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.417167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.422006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.422095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.422114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.427018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.427075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.427094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.431748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.431808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.431834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.436810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.436880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.436900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.442273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.442329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.442348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.447519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.447578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.447598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.453394] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.453487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.453506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.458522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.458577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.458597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.463466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.463535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.463555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.468150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.468249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.468269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.473282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.473337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.473357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.478435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.478492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.478511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.483767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.483888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.483908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.148 
[2024-12-10 00:16:50.489152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.489255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.489274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.495113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.495194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.495214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.501812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.501897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.501917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.507124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.507204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.507225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.512017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.512079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.512099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.516773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.516850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.516870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.148 [2024-12-10 00:16:50.521616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.521673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.521693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:06.148 [2024-12-10 00:16:50.526595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.148 [2024-12-10 00:16:50.526663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.148 [2024-12-10 00:16:50.526686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.531541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.531661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.531682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.536165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.536227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.536247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.540643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.540697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.540717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.545127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.545182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.545202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.549499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.549571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.549591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.553917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.553985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.554005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.558348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.558418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.558438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.562685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.562748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.562767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.567098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.567160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.567180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.571524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.571595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.571615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.575958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.576010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.576029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.580701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.580790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.580810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.585763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.585821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.585846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.591271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.591412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.591431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.596574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.596635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.596655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.602453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.602530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.602550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.607660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.607735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.607754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.613099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.613164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.613184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.149 [2024-12-10 00:16:50.618073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.149 [2024-12-10 00:16:50.618129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.149 [2024-12-10 00:16:50.618149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.623080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.623139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.623158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.628418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.628586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.628608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.633436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.633495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.633515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.638975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.639069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.639089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.643712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.643780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.643800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.649079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.649175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.649195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.654265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.654347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.654370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.659162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.659275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.659295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.663912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.663967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.663986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.668500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.668606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.668626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.672998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.673053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.673073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.677714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.677833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.677852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.682494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.682654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.682674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.687357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.687422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.687442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.692153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.692264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 
00:16:50.692283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.697000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.697083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.697103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.701863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.701920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.701939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.706674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.706746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.706766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.711450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.711509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.411 [2024-12-10 00:16:50.711529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.411 [2024-12-10 00:16:50.716252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.411 [2024-12-10 00:16:50.716363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.716383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.721014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.721139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.721159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.725724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.725807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:06.412 [2024-12-10 00:16:50.725833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.730436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.730499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.730519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.735116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.735182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.735202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.739867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.739936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.739955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.744780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.744846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.744866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.749554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.749622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.749642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.754133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.754240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.754259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.758626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.758684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.758704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.763316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.763440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.763459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.768357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.768414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.768433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.773868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.773931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.773951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.779158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.779235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.779258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.784062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.784147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.784168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.789236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.789328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.789347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.793966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.794031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.794051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.798714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.798794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.798813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.803217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.803276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.803296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.807731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.807894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.807915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.812549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.812604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.812624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.817740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.817795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.817814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.823172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.823230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.823250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.828049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.828107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.828127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.832638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.832721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.832741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.837340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.837395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.837414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.841961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.842056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.842075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.846462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.846568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.412 [2024-12-10 00:16:50.846587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.412 [2024-12-10 00:16:50.850912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.412 [2024-12-10 00:16:50.850969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.413 [2024-12-10 00:16:50.850999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.413 [2024-12-10 00:16:50.855661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.413 [2024-12-10 00:16:50.855742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.413 [2024-12-10 00:16:50.855761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.413 [2024-12-10 00:16:50.860308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.413 [2024-12-10 00:16:50.860384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.413 [2024-12-10 00:16:50.860404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.413 [2024-12-10 00:16:50.864706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.413 [2024-12-10 00:16:50.864790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.413 [2024-12-10 00:16:50.864810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.413 [2024-12-10 00:16:50.869067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.413 [2024-12-10 00:16:50.869125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.413 [2024-12-10 00:16:50.869145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.413 [2024-12-10 00:16:50.873433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.413 [2024-12-10 00:16:50.873503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.413 [2024-12-10 00:16:50.873523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.413 [2024-12-10 00:16:50.877809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.413 [2024-12-10 00:16:50.877880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.413 [2024-12-10 00:16:50.877900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.413 [2024-12-10 00:16:50.882202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.413 [2024-12-10 00:16:50.882259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.413 [2024-12-10 00:16:50.882279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.886536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.886599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.886619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.890963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.891016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.891036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.895653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.895739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.895760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.900460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.900530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.900553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.905374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.905465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.905485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.910067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.910132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.910153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.914724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.914781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.914800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.919461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.919536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.919556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.924076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 
00:16:50.924138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.924157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.928725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.928784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.928803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.933365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.933438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.933458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.938103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.938165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.938184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.942993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.943065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.943088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.947619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.947732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.947751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.952403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.952463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.952483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.957021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 
00:35:06.675 [2024-12-10 00:16:50.957101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.957120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.961693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.961769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.961789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.966402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.966517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.966536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.971582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.971645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.971665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.976461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.976527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.976547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.981225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.981281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.981301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.985934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.985990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.986010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.990541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) 
with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.990599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.990618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.994866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.994940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.994960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:50.999246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.675 [2024-12-10 00:16:50.999303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.675 [2024-12-10 00:16:50.999323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.675 [2024-12-10 00:16:51.003653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.003719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.003739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.007994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.008055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.008074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.012391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.012448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.012467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.017122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.017180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.017199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.021887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.021996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.022015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.027549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.027615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.027634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.032768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.032836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.032856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.037707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.037783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.037803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.042297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.042363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.042383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.047054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.047125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.047144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.051726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.051787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.051807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.056607] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.056681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.056701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.061394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.061448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.061467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.066156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.066215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.066238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.070839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.070900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.070920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.075516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.075575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.075594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.080372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.080472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.080492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.085335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.085434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.085454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.090900] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.090963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.090982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.095963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.096045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.096064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.100861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.100917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.100936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.105705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.105778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.105797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.110530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.110610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.110630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.115299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.115381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.115401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.120197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.120293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.120313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 
00:16:51.124888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.124975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.124995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.129452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.129509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.129528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.676 [2024-12-10 00:16:51.134453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.676 [2024-12-10 00:16:51.134532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.676 [2024-12-10 00:16:51.134552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.677 [2024-12-10 00:16:51.139094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.677 [2024-12-10 00:16:51.139165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.677 [2024-12-10 00:16:51.139185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.677 [2024-12-10 00:16:51.143712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.677 [2024-12-10 00:16:51.143780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.677 [2024-12-10 00:16:51.143801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.938 [2024-12-10 00:16:51.148159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.938 [2024-12-10 00:16:51.148216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.938 [2024-12-10 00:16:51.148236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.938 [2024-12-10 00:16:51.152569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.938 [2024-12-10 00:16:51.152628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.938 [2024-12-10 00:16:51.152647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:35:06.938 [2024-12-10 00:16:51.156993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.938 [2024-12-10 00:16:51.157050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.938 [2024-12-10 00:16:51.157070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.938 [2024-12-10 00:16:51.161578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.938 [2024-12-10 00:16:51.161644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.938 [2024-12-10 00:16:51.161664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.938 [2024-12-10 00:16:51.166119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.938 [2024-12-10 00:16:51.166188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.938 [2024-12-10 00:16:51.166209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.938 [2024-12-10 00:16:51.170635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.938 [2024-12-10 00:16:51.170691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.938 [2024-12-10 00:16:51.170711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.938 [2024-12-10 00:16:51.175372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.938 [2024-12-10 00:16:51.175455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.938 [2024-12-10 00:16:51.175475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.938 [2024-12-10 00:16:51.180203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.938 [2024-12-10 00:16:51.180261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.938 [2024-12-10 00:16:51.180281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.938 [2024-12-10 00:16:51.185028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.938 [2024-12-10 00:16:51.185175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.938 [2024-12-10 00:16:51.185194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.189973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.190051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.190074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.194786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.194869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.194889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.199746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.199812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.199838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.939 6291.00 IOPS, 786.38 MiB/s [2024-12-09T23:16:51.412Z] [2024-12-10 00:16:51.205726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.205794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.205813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.210902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.210965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.210984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.215886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.216021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.216042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.220736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.220790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.220810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.225505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.225571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.225590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.230440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.230545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.230564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.235304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.235366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.235386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.239971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.240063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.240083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.244769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.244844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.244864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.249574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.249633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.249652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.254261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.254320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 
00:16:51.254340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.258875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.258949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.258968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.263516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.263572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.263591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.268258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.268327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.268347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.272952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.273013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.273033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.277541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.277610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.277630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.281927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.282003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.282023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.286308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.286368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:06.939 [2024-12-10 00:16:51.286387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.290687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.290759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.290778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.294986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.295056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.295075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.299332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.299403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.299422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.304084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.304184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.304204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.308794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.308855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.308874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.313775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.313854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.313877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.318929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.318986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.319006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.323962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.324021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.324040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.329328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.329457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.329477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.334650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.334709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.334728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.339806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.339875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.339894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.345024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.345091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.345110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.350499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.350555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.350574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.355632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.355691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.355710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.361691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.361750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.361769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.366775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.366898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.366917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.371851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.939 [2024-12-10 00:16:51.371940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.939 [2024-12-10 00:16:51.371960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.939 [2024-12-10 00:16:51.377387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.940 [2024-12-10 00:16:51.377580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-12-10 00:16:51.377601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.940 [2024-12-10 00:16:51.382533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.940 [2024-12-10 00:16:51.382621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-12-10 00:16:51.382640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.940 [2024-12-10 00:16:51.387889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.940 [2024-12-10 00:16:51.387963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-12-10 00:16:51.387982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:06.940 [2024-12-10 00:16:51.393372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.940 [2024-12-10 00:16:51.393430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-12-10 00:16:51.393449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.940 [2024-12-10 00:16:51.398646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.940 [2024-12-10 00:16:51.398712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-12-10 00:16:51.398731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:06.940 [2024-12-10 00:16:51.404011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.940 [2024-12-10 00:16:51.404071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-12-10 00:16:51.404090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:06.940 [2024-12-10 00:16:51.409507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:06.940 [2024-12-10 00:16:51.409642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.940 [2024-12-10 00:16:51.409661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.200 [2024-12-10 00:16:51.414681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.200 [2024-12-10 00:16:51.414759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-12-10 00:16:51.414779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.200 [2024-12-10 00:16:51.419819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.200 [2024-12-10 00:16:51.420260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-12-10 00:16:51.420281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.200 [2024-12-10 00:16:51.425745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.200 [2024-12-10 00:16:51.425800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-12-10 00:16:51.425819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.200 [2024-12-10 00:16:51.431146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.200 [2024-12-10 00:16:51.431222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-12-10 00:16:51.431242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.200 [2024-12-10 00:16:51.436383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.200 [2024-12-10 00:16:51.436444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-12-10 00:16:51.436463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.200 [2024-12-10 00:16:51.441795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.200 [2024-12-10 00:16:51.441866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-12-10 00:16:51.441885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.200 [2024-12-10 00:16:51.447136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.200 [2024-12-10 00:16:51.447214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.200 [2024-12-10 00:16:51.447233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.200 [2024-12-10 00:16:51.452539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.452644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.452667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.458522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.458599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.458619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.463859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.464024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.464043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.469990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.470050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.470070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.475380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.475454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.475474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.480591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.480652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.480672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.486718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.486795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.486815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.491974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.492129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.492148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.498126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.498189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.498209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.504318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.504463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.504482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.511735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 
00:16:51.511864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.511884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.519524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.519664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.519684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.526117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.526298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.526318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.533570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.533715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.533735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.540748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.540895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.540917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.547854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.547997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.548017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.555039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.555208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.555228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.562793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 
00:35:07.201 [2024-12-10 00:16:51.562927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.562949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.570602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.570759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.570779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.577749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.577896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.577916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.585302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.585599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.585621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.592003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.592321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.592343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.598588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.598837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.598859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.605089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.605342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.605363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.611812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) 
with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.612110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.612131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.618526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.618850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.618872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.626503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.626742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.626767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.633491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.201 [2024-12-10 00:16:51.633688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.201 [2024-12-10 00:16:51.633710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.201 [2024-12-10 00:16:51.640559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.202 [2024-12-10 00:16:51.640888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.202 [2024-12-10 00:16:51.640909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.202 [2024-12-10 00:16:51.647989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.202 [2024-12-10 00:16:51.648318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.202 [2024-12-10 00:16:51.648339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.202 [2024-12-10 00:16:51.655253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.202 [2024-12-10 00:16:51.655602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.202 [2024-12-10 00:16:51.655624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.202 [2024-12-10 00:16:51.662120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.202 [2024-12-10 00:16:51.662443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.202 [2024-12-10 00:16:51.662465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.202 [2024-12-10 00:16:51.668891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.202 [2024-12-10 00:16:51.669217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.202 [2024-12-10 00:16:51.669239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.676183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.676480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.676503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.683393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.683603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.683625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.690058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.690335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.690357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.697090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.697378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.697401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.704082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.704409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.704430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.711513] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.711847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.711869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.718389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.718681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.718703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.725365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.725660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.725681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.732901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.733201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.733223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.739561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.739816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.739844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.746170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.746507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.746529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.753983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.754110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.754131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.761141] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.761454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.761475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.768242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.768525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.768547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.775743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.776034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.776057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.782617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.782865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.782887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.788601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.788809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.788838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.795003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.795235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.795258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.801587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.801852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.801874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.463 
[2024-12-10 00:16:51.806536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.806757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.806782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.811477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.811697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.811718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.816155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.816379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.816401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.820972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.821204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.821226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.826475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.826769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.826791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.832977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.463 [2024-12-10 00:16:51.833198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.463 [2024-12-10 00:16:51.833219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.463 [2024-12-10 00:16:51.838189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.838447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.838469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:07.464 [2024-12-10 00:16:51.843181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.843399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.843420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.847708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.847928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.847949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.852283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.852512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.852534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.857435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.857743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.857765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.863378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.863670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.863691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.869574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.869881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.869903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.875189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.875420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.875442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.879503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.879730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.879751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.883732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.883968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.883990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.887964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.888192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.888213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.892122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.892338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.892359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.896519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.896756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.896778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.902510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.902760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.902782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.908458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.908729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.908750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.914358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.914660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.914681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.920274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.920619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.920641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.926446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.926714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.926736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.464 [2024-12-10 00:16:51.932530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.464 [2024-12-10 00:16:51.932790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.464 [2024-12-10 00:16:51.932812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:51.939025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:51.939359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:51.939381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:51.945224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:51.945541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:51.945562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:51.951266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:51.951590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:51.951612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:51.957894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:51.958196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:51.958217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:51.964067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:51.964378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:51.964400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:51.970443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:51.970743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:51.970765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:51.976779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:51.977086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:51.977108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:51.983354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:51.983605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:51.983627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:51.989633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:51.989893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:51.989915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:51.995639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:51.995987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:51.996009] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.001904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.002296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.002321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.008148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.008479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.008500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.014347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.014678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.014699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.020633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.020897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.020920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.026861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.027176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.027198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.033224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.033550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.033572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.039435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.039750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.039772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.045840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.045994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.046015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.052208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.052514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.052535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.058872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.059149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.059170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.066036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.066317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.066339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.072960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.073235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.073257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.080491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.080748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.080769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.086565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.086815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 
00:16:52.086843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.092808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.093055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.093077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.725 [2024-12-10 00:16:52.099057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.725 [2024-12-10 00:16:52.099282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.725 [2024-12-10 00:16:52.099303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.104628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.104898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.104921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.111751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.111973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.111995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.118235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.118476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.118498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.123920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.124142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.124163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.129179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.129429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:07.726 [2024-12-10 00:16:52.129451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.134084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.134307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.134329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.138857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.139079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.139101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.143903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.144230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.144252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.150115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.150453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.150476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.155395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.155610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.155632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.160424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.160664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.160689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.165277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.165498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.165520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.169849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.170079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.170101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.174246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.174466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.174488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.179694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.180016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.180038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.185620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.185939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.185961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.726 [2024-12-10 00:16:52.191933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.726 [2024-12-10 00:16:52.192228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.726 [2024-12-10 00:16:52.192249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:07.985 [2024-12-10 00:16:52.197897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.985 [2024-12-10 00:16:52.198190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.985 [2024-12-10 00:16:52.198212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:07.985 [2024-12-10 00:16:52.203907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.985 [2024-12-10 00:16:52.204236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.985 [2024-12-10 00:16:52.204257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:07.985 5787.50 IOPS, 723.44 MiB/s [2024-12-09T23:16:52.458Z] [2024-12-10 00:16:52.210532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5311b0) with pdu=0x200016eff3c8 00:35:07.985 [2024-12-10 00:16:52.210731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.985 [2024-12-10 00:16:52.210751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:07.985 00:35:07.985 Latency(us) 00:35:07.985 [2024-12-09T23:16:52.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.985 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:07.985 nvme0n1 : 2.00 5783.95 722.99 0.00 0.00 2761.32 1939.87 7969.18 00:35:07.985 [2024-12-09T23:16:52.458Z] =================================================================================================================== 00:35:07.985 [2024-12-09T23:16:52.458Z] Total : 5783.95 722.99 0.00 0.00 2761.32 1939.87 7969.18 00:35:07.985 { 00:35:07.985 "results": [ 00:35:07.985 { 00:35:07.985 "job": "nvme0n1", 00:35:07.985 "core_mask": "0x2", 00:35:07.985 "workload": "randwrite", 00:35:07.985 "status": "finished", 00:35:07.985 "queue_depth": 16, 00:35:07.985 "io_size": 131072, 00:35:07.985 "runtime": 2.004684, 00:35:07.985 "iops": 5783.953979779357, 00:35:07.985 "mibps": 722.9942474724196, 00:35:07.985 "io_failed": 0, 00:35:07.985 "io_timeout": 0, 00:35:07.985 "avg_latency_us": 2761.3213496852095, 00:35:07.985 "min_latency_us": 1939.8656, 00:35:07.985 "max_latency_us": 7969.1776 00:35:07.985 } 00:35:07.985 ], 00:35:07.985 "core_count": 1 00:35:07.985 } 00:35:07.985 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:07.985 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:07.985 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:07.985 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:07.985 | .driver_specific 00:35:07.985 | .nvme_error 00:35:07.985 | .status_code 00:35:07.985 | .command_transient_transport_error' 00:35:07.985 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 375 > 0 )) 00:35:07.985 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 584285 00:35:07.985 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 584285 ']' 00:35:07.985 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 584285 00:35:07.985 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:07.985 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:07.985 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 584285 00:35:08.245 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:08.245 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:08.245 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 584285' 00:35:08.245 killing process with pid 584285 00:35:08.245 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 584285 00:35:08.245 Received shutdown signal, test time was about 2.000000 seconds 00:35:08.245 00:35:08.245 Latency(us) 00:35:08.245 [2024-12-09T23:16:52.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.245 [2024-12-09T23:16:52.718Z] =================================================================================================================== 00:35:08.245 [2024-12-09T23:16:52.718Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:08.245 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 584285 00:35:08.245 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 582409 00:35:08.245 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 582409 ']' 00:35:08.245 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 582409 00:35:08.245 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:08.245 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:08.245 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 582409 00:35:08.504 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:08.504 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:08.504 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 582409' 00:35:08.504 killing process with pid 582409 00:35:08.504 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 582409 00:35:08.504 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 582409 00:35:08.504 00:35:08.504 real 0m15.493s 00:35:08.504 user 0m28.784s 00:35:08.504 sys 0m5.242s 00:35:08.504 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.504 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.504 ************************************ 00:35:08.504 END TEST nvmf_digest_error 00:35:08.504 ************************************ 00:35:08.504 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:08.504 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:08.504 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:08.504 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:08.504 00:16:52 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:08.504 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:08.504 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:08.504 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:08.504 rmmod nvme_tcp 00:35:08.504 rmmod nvme_fabrics 00:35:08.504 rmmod nvme_keyring 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 582409 ']' 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 582409 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 582409 ']' 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 582409 00:35:08.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (582409) - No such process 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 582409 is not found' 00:35:08.763 Process with pid 582409 is not found 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:08.763 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:08.764 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.764 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:08.764 00:16:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.671 00:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:10.671 00:35:10.671 real 0m41.975s 00:35:10.671 user 1m2.403s 00:35:10.671 sys 0m16.244s 00:35:10.671 00:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:10.671 00:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.671 ************************************ 00:35:10.671 END TEST nvmf_digest 00:35:10.671 ************************************ 00:35:10.671 00:16:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:10.671 00:16:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:10.671 00:16:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:10.671 00:16:55 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:10.671 00:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:10.671 00:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:10.671 00:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.929 ************************************ 00:35:10.929 START TEST nvmf_bdevperf 00:35:10.929 ************************************ 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:10.929 * Looking for test storage... 00:35:10.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:10.929 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:10.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.930 --rc genhtml_branch_coverage=1 00:35:10.930 --rc genhtml_function_coverage=1 00:35:10.930 --rc genhtml_legend=1 00:35:10.930 --rc geninfo_all_blocks=1 00:35:10.930 --rc geninfo_unexecuted_blocks=1 00:35:10.930 00:35:10.930 ' 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:10.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.930 --rc genhtml_branch_coverage=1 00:35:10.930 --rc genhtml_function_coverage=1 00:35:10.930 --rc genhtml_legend=1 00:35:10.930 --rc geninfo_all_blocks=1 00:35:10.930 --rc geninfo_unexecuted_blocks=1 00:35:10.930 00:35:10.930 ' 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:10.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.930 --rc genhtml_branch_coverage=1 00:35:10.930 --rc genhtml_function_coverage=1 00:35:10.930 --rc genhtml_legend=1 00:35:10.930 --rc geninfo_all_blocks=1 00:35:10.930 --rc geninfo_unexecuted_blocks=1 00:35:10.930 00:35:10.930 ' 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:10.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.930 --rc genhtml_branch_coverage=1 00:35:10.930 --rc genhtml_function_coverage=1 00:35:10.930 --rc genhtml_legend=1 00:35:10.930 --rc geninfo_all_blocks=1 00:35:10.930 --rc geninfo_unexecuted_blocks=1 00:35:10.930 00:35:10.930 ' 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:10.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:10.930 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:11.190 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:11.190 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:11.190 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:11.190 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:11.190 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:11.190 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:11.190 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.190 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:11.190 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.190 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:11.190 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:11.190 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:11.190 00:16:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:19.335 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:19.335 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:19.335 Found net devices under 0000:af:00.0: cvl_0_0 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:19.335 Found net devices under 0000:af:00.1: cvl_0_1 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:19.335 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:19.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:19.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:35:19.336 00:35:19.336 --- 10.0.0.2 ping statistics --- 00:35:19.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.336 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:19.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:19.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:35:19.336 00:35:19.336 --- 10.0.0.1 ping statistics --- 00:35:19.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.336 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=588649 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 588649 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 588649 ']' 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.336 00:17:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.336 [2024-12-10 00:17:02.736792] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:35:19.336 [2024-12-10 00:17:02.736853] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.336 [2024-12-10 00:17:02.832223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:19.336 [2024-12-10 00:17:02.873508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.336 [2024-12-10 00:17:02.873547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:19.336 [2024-12-10 00:17:02.873557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.336 [2024-12-10 00:17:02.873566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:19.336 [2024-12-10 00:17:02.873574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:19.336 [2024-12-10 00:17:02.875187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:19.336 [2024-12-10 00:17:02.875296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.336 [2024-12-10 00:17:02.875297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.336 [2024-12-10 00:17:03.630678] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.336 Malloc0 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:19.336 [2024-12-10 00:17:03.687534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:19.336 { 00:35:19.336 "params": { 00:35:19.336 "name": "Nvme$subsystem", 00:35:19.336 "trtype": "$TEST_TRANSPORT", 00:35:19.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:19.336 "adrfam": "ipv4", 00:35:19.336 "trsvcid": "$NVMF_PORT", 00:35:19.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:19.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:19.336 "hdgst": ${hdgst:-false}, 00:35:19.336 "ddgst": ${ddgst:-false} 00:35:19.336 }, 00:35:19.336 "method": "bdev_nvme_attach_controller" 00:35:19.336 } 00:35:19.336 EOF 00:35:19.336 )") 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:19.336 00:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:19.336 "params": { 00:35:19.336 "name": "Nvme1", 00:35:19.336 "trtype": "tcp", 00:35:19.336 "traddr": "10.0.0.2", 00:35:19.336 "adrfam": "ipv4", 00:35:19.336 "trsvcid": "4420", 00:35:19.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:19.336 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:19.336 "hdgst": false, 00:35:19.336 "ddgst": false 00:35:19.336 }, 00:35:19.336 "method": "bdev_nvme_attach_controller" 00:35:19.336 }' 00:35:19.336 [2024-12-10 00:17:03.743069] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:35:19.336 [2024-12-10 00:17:03.743118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588929 ] 00:35:19.596 [2024-12-10 00:17:03.833062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.596 [2024-12-10 00:17:03.872337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.856 Running I/O for 1 seconds... 00:35:20.792 11669.00 IOPS, 45.58 MiB/s 00:35:20.792 Latency(us) 00:35:20.792 [2024-12-09T23:17:05.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.792 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:20.792 Verification LBA range: start 0x0 length 0x4000 00:35:20.792 Nvme1n1 : 1.01 11724.16 45.80 0.00 0.00 10877.94 2306.87 9961.47 00:35:20.792 [2024-12-09T23:17:05.265Z] =================================================================================================================== 00:35:20.792 [2024-12-09T23:17:05.266Z] Total : 11724.16 45.80 0.00 0.00 10877.94 2306.87 9961.47 00:35:20.793 00:17:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=589198 00:35:20.793 00:17:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:20.793 00:17:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:20.793 00:17:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:20.793 00:17:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:20.793 00:17:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:20.793 00:17:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:20.793 00:17:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:20.793 { 00:35:20.793 "params": { 00:35:20.793 "name": "Nvme$subsystem", 00:35:20.793 "trtype": "$TEST_TRANSPORT", 00:35:20.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:20.793 "adrfam": "ipv4", 00:35:20.793 "trsvcid": "$NVMF_PORT", 00:35:20.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:20.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:20.793 "hdgst": ${hdgst:-false}, 00:35:20.793 "ddgst": ${ddgst:-false} 00:35:20.793 }, 00:35:20.793 "method": "bdev_nvme_attach_controller" 00:35:20.793 } 00:35:20.793 EOF 00:35:20.793 )") 00:35:20.793 00:17:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:20.793 00:17:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:35:20.793 00:17:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:20.793 00:17:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:20.793 "params": { 00:35:20.793 "name": "Nvme1", 00:35:20.793 "trtype": "tcp", 00:35:20.793 "traddr": "10.0.0.2", 00:35:20.793 "adrfam": "ipv4", 00:35:20.793 "trsvcid": "4420", 00:35:20.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:20.793 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:20.793 "hdgst": false, 00:35:20.793 "ddgst": false 00:35:20.793 }, 00:35:20.793 "method": "bdev_nvme_attach_controller" 00:35:20.793 }' 00:35:21.052 [2024-12-10 00:17:05.289995] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:35:21.052 [2024-12-10 00:17:05.290047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid589198 ] 00:35:21.052 [2024-12-10 00:17:05.381374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.052 [2024-12-10 00:17:05.417398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.311 Running I/O for 15 seconds... 00:35:23.192 11772.00 IOPS, 45.98 MiB/s [2024-12-09T23:17:08.611Z] 11706.00 IOPS, 45.73 MiB/s [2024-12-09T23:17:08.611Z] 00:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 588649 00:35:24.138 00:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:24.138 [2024-12-10 00:17:08.264642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 00:17:08.264687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.264706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 00:17:08.264721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.264734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 00:17:08.264745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.264758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 00:17:08.264768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.264779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 00:17:08.264789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.264802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 
00:17:08.264811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.264835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 00:17:08.264846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.264857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 00:17:08.264866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.264877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 00:17:08.264887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.264899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 00:17:08.264909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.264924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 00:17:08.264933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.264947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 00:17:08.264957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.264970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 00:17:08.264979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.264991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 00:17:08.265003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.265017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.138 [2024-12-10 00:17:08.265030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.138 [2024-12-10 00:17:08.265042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265272] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.139 [2024-12-10 00:17:08.265752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.139 [2024-12-10 00:17:08.265773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.139 [2024-12-10 00:17:08.265793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.139 [2024-12-10 00:17:08.265812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.139 [2024-12-10 00:17:08.265839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.139 [2024-12-10 00:17:08.265860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.139 [2024-12-10 00:17:08.265881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.139 [2024-12-10 00:17:08.265892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.140 [2024-12-10 00:17:08.265901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.265912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.265922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.265932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.265941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 
[2024-12-10 00:17:08.265952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.265961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.265972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.265981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.265992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266152] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266547] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.140 [2024-12-10 00:17:08.266606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.140 [2024-12-10 00:17:08.266617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 
nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.141 [2024-12-10 00:17:08.266888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.141 [2024-12-10 00:17:08.266907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:24.141 [2024-12-10 00:17:08.266926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111080 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.266988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.266999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:24.141 [2024-12-10 00:17:08.267166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 00:17:08.267341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.141 [2024-12-10 
00:17:08.267360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.141 [2024-12-10 00:17:08.267372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e342d0 is same with the state(6) to be set 00:35:24.141 [2024-12-10 00:17:08.267383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:24.141 [2024-12-10 00:17:08.267391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:24.142 [2024-12-10 00:17:08.267398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111248 len:8 PRP1 0x0 PRP2 0x0 00:35:24.142 [2024-12-10 00:17:08.267408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:24.142 [2024-12-10 00:17:08.270182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.142 [2024-12-10 00:17:08.270240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.142 [2024-12-10 00:17:08.271634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.142 [2024-12-10 00:17:08.271663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.142 [2024-12-10 00:17:08.271674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.142 [2024-12-10 00:17:08.271878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.142 [2024-12-10 00:17:08.272054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.142 [2024-12-10 00:17:08.272065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.142 [2024-12-10 00:17:08.272076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.142 [2024-12-10 00:17:08.272085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.142 [2024-12-10 00:17:08.283175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.142 [2024-12-10 00:17:08.283557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.142 [2024-12-10 00:17:08.283577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.142 [2024-12-10 00:17:08.283588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.142 [2024-12-10 00:17:08.283745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.142 [2024-12-10 00:17:08.283932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.142 [2024-12-10 00:17:08.283944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.142 [2024-12-10 00:17:08.283953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.142 [2024-12-10 00:17:08.283962] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.142 [2024-12-10 00:17:08.295968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.142 [2024-12-10 00:17:08.296326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.142 [2024-12-10 00:17:08.296345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.142 [2024-12-10 00:17:08.296355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.142 [2024-12-10 00:17:08.296513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.142 [2024-12-10 00:17:08.296671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.142 [2024-12-10 00:17:08.296685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.142 [2024-12-10 00:17:08.296694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.142 [2024-12-10 00:17:08.296702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.142 [2024-12-10 00:17:08.308911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.142 [2024-12-10 00:17:08.309257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.142 [2024-12-10 00:17:08.309311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.142 [2024-12-10 00:17:08.309343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.142 [2024-12-10 00:17:08.309951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.142 [2024-12-10 00:17:08.310507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.142 [2024-12-10 00:17:08.310518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.142 [2024-12-10 00:17:08.310526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.142 [2024-12-10 00:17:08.310534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.142 [2024-12-10 00:17:08.321643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.142 [2024-12-10 00:17:08.322090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.142 [2024-12-10 00:17:08.322146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.142 [2024-12-10 00:17:08.322179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.142 [2024-12-10 00:17:08.322556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.142 [2024-12-10 00:17:08.322715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.142 [2024-12-10 00:17:08.322727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.142 [2024-12-10 00:17:08.322735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.142 [2024-12-10 00:17:08.322743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.142 [2024-12-10 00:17:08.334373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.142 [2024-12-10 00:17:08.334725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.142 [2024-12-10 00:17:08.334744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.142 [2024-12-10 00:17:08.334754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.142 [2024-12-10 00:17:08.334927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.142 [2024-12-10 00:17:08.335093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.142 [2024-12-10 00:17:08.335105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.142 [2024-12-10 00:17:08.335114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.142 [2024-12-10 00:17:08.335126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.142 [2024-12-10 00:17:08.347283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.142 [2024-12-10 00:17:08.347659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.142 [2024-12-10 00:17:08.347715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.142 [2024-12-10 00:17:08.347747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.142 [2024-12-10 00:17:08.348336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.142 [2024-12-10 00:17:08.348504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.142 [2024-12-10 00:17:08.348515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.142 [2024-12-10 00:17:08.348525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.142 [2024-12-10 00:17:08.348534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.142 [2024-12-10 00:17:08.360237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.142 [2024-12-10 00:17:08.360623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.142 [2024-12-10 00:17:08.360676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.142 [2024-12-10 00:17:08.360708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.142 [2024-12-10 00:17:08.361318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.142 [2024-12-10 00:17:08.361487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.142 [2024-12-10 00:17:08.361498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.142 [2024-12-10 00:17:08.361507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.142 [2024-12-10 00:17:08.361515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.142 [2024-12-10 00:17:08.373049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.142 [2024-12-10 00:17:08.373407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.142 [2024-12-10 00:17:08.373426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.142 [2024-12-10 00:17:08.373436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.142 [2024-12-10 00:17:08.373601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.142 [2024-12-10 00:17:08.373768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.142 [2024-12-10 00:17:08.373779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.142 [2024-12-10 00:17:08.373788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.142 [2024-12-10 00:17:08.373796] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.142 [2024-12-10 00:17:08.385841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.142 [2024-12-10 00:17:08.386141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.142 [2024-12-10 00:17:08.386159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.142 [2024-12-10 00:17:08.386169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.143 [2024-12-10 00:17:08.386335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.143 [2024-12-10 00:17:08.386502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.143 [2024-12-10 00:17:08.386513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.143 [2024-12-10 00:17:08.386522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.143 [2024-12-10 00:17:08.386530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.143 [2024-12-10 00:17:08.398627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.143 [2024-12-10 00:17:08.398978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.143 [2024-12-10 00:17:08.398997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.143 [2024-12-10 00:17:08.399007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.143 [2024-12-10 00:17:08.399173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.143 [2024-12-10 00:17:08.399340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.143 [2024-12-10 00:17:08.399351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.143 [2024-12-10 00:17:08.399360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.143 [2024-12-10 00:17:08.399368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.143 [2024-12-10 00:17:08.411519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.143 [2024-12-10 00:17:08.411810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.143 [2024-12-10 00:17:08.411835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.143 [2024-12-10 00:17:08.411845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.143 [2024-12-10 00:17:08.412019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.143 [2024-12-10 00:17:08.412177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.143 [2024-12-10 00:17:08.412188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.143 [2024-12-10 00:17:08.412196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.143 [2024-12-10 00:17:08.412204] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.143 [2024-12-10 00:17:08.424398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.143 [2024-12-10 00:17:08.424818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.143 [2024-12-10 00:17:08.424883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.143 [2024-12-10 00:17:08.424915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.143 [2024-12-10 00:17:08.425516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.143 [2024-12-10 00:17:08.426030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.143 [2024-12-10 00:17:08.426042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.143 [2024-12-10 00:17:08.426052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.143 [2024-12-10 00:17:08.426060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.143 [2024-12-10 00:17:08.437210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.143 [2024-12-10 00:17:08.437626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.143 [2024-12-10 00:17:08.437666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.143 [2024-12-10 00:17:08.437700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.143 [2024-12-10 00:17:08.438311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.143 [2024-12-10 00:17:08.438838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.143 [2024-12-10 00:17:08.438849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.143 [2024-12-10 00:17:08.438858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.143 [2024-12-10 00:17:08.438867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.143 [2024-12-10 00:17:08.450026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.143 [2024-12-10 00:17:08.450374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.143 [2024-12-10 00:17:08.450393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.143 [2024-12-10 00:17:08.450403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.143 [2024-12-10 00:17:08.450569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.143 [2024-12-10 00:17:08.450735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.143 [2024-12-10 00:17:08.450746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.143 [2024-12-10 00:17:08.450755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.143 [2024-12-10 00:17:08.450763] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.143 [2024-12-10 00:17:08.462803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.143 [2024-12-10 00:17:08.463246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.143 [2024-12-10 00:17:08.463301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.143 [2024-12-10 00:17:08.463333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.143 [2024-12-10 00:17:08.463836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.143 [2024-12-10 00:17:08.463996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.143 [2024-12-10 00:17:08.464009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.143 [2024-12-10 00:17:08.464018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.143 [2024-12-10 00:17:08.464026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.143 [2024-12-10 00:17:08.475716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.143 [2024-12-10 00:17:08.476093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.143 [2024-12-10 00:17:08.476148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.143 [2024-12-10 00:17:08.476180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.143 [2024-12-10 00:17:08.476771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.143 [2024-12-10 00:17:08.477158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.143 [2024-12-10 00:17:08.477170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.143 [2024-12-10 00:17:08.477179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.143 [2024-12-10 00:17:08.477187] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.143 [2024-12-10 00:17:08.488521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.143 [2024-12-10 00:17:08.488938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.143 [2024-12-10 00:17:08.488995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.143 [2024-12-10 00:17:08.489028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.143 [2024-12-10 00:17:08.489620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.143 [2024-12-10 00:17:08.490148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.143 [2024-12-10 00:17:08.490160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.143 [2024-12-10 00:17:08.490169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.144 [2024-12-10 00:17:08.490176] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.144 [2024-12-10 00:17:08.501240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.144 [2024-12-10 00:17:08.501662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.144 [2024-12-10 00:17:08.501680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.144 [2024-12-10 00:17:08.501690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.144 [2024-12-10 00:17:08.501868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.144 [2024-12-10 00:17:08.502036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.144 [2024-12-10 00:17:08.502047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.144 [2024-12-10 00:17:08.502056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.144 [2024-12-10 00:17:08.502067] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.144 [2024-12-10 00:17:08.514109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.144 [2024-12-10 00:17:08.514523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.144 [2024-12-10 00:17:08.514541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.144 [2024-12-10 00:17:08.514551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.144 [2024-12-10 00:17:08.514708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.144 [2024-12-10 00:17:08.514870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.144 [2024-12-10 00:17:08.514880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.144 [2024-12-10 00:17:08.514889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.144 [2024-12-10 00:17:08.514897] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.144 [2024-12-10 00:17:08.527108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.144 [2024-12-10 00:17:08.527552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.144 [2024-12-10 00:17:08.527571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.144 [2024-12-10 00:17:08.527581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.144 [2024-12-10 00:17:08.527766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.144 [2024-12-10 00:17:08.527969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.144 [2024-12-10 00:17:08.527981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.144 [2024-12-10 00:17:08.527990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.144 [2024-12-10 00:17:08.527998] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.144 [2024-12-10 00:17:08.540073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.144 [2024-12-10 00:17:08.540495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.144 [2024-12-10 00:17:08.540548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.144 [2024-12-10 00:17:08.540580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.144 [2024-12-10 00:17:08.541017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.144 [2024-12-10 00:17:08.541184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.144 [2024-12-10 00:17:08.541196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.144 [2024-12-10 00:17:08.541205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.144 [2024-12-10 00:17:08.541213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.144 [2024-12-10 00:17:08.552821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.144 [2024-12-10 00:17:08.553245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.144 [2024-12-10 00:17:08.553298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.144 [2024-12-10 00:17:08.553330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.144 [2024-12-10 00:17:08.553865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.144 [2024-12-10 00:17:08.554033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.144 [2024-12-10 00:17:08.554044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.144 [2024-12-10 00:17:08.554053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.144 [2024-12-10 00:17:08.554061] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.144 [2024-12-10 00:17:08.565568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.144 [2024-12-10 00:17:08.565986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.144 [2024-12-10 00:17:08.566005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.144 [2024-12-10 00:17:08.566014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.144 [2024-12-10 00:17:08.566172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.144 [2024-12-10 00:17:08.566328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.144 [2024-12-10 00:17:08.566340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.144 [2024-12-10 00:17:08.566348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.144 [2024-12-10 00:17:08.566355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.144 [2024-12-10 00:17:08.578362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.144 [2024-12-10 00:17:08.578791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.144 [2024-12-10 00:17:08.578857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.144 [2024-12-10 00:17:08.578891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.144 [2024-12-10 00:17:08.579288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.144 [2024-12-10 00:17:08.579447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.144 [2024-12-10 00:17:08.579458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.144 [2024-12-10 00:17:08.579466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.144 [2024-12-10 00:17:08.579473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.144 [2024-12-10 00:17:08.591151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.144 [2024-12-10 00:17:08.591580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.144 [2024-12-10 00:17:08.591638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.144 [2024-12-10 00:17:08.591670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.144 [2024-12-10 00:17:08.592181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.144 [2024-12-10 00:17:08.592350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.144 [2024-12-10 00:17:08.592362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.144 [2024-12-10 00:17:08.592371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.144 [2024-12-10 00:17:08.592379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.144 [2024-12-10 00:17:08.604147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.405 [2024-12-10 00:17:08.604547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.405 [2024-12-10 00:17:08.604566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.405 [2024-12-10 00:17:08.604578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.405 [2024-12-10 00:17:08.604744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.405 [2024-12-10 00:17:08.604918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.405 [2024-12-10 00:17:08.604930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.405 [2024-12-10 00:17:08.604939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.405 [2024-12-10 00:17:08.604947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.405 [2024-12-10 00:17:08.616828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.405 [2024-12-10 00:17:08.617266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.405 [2024-12-10 00:17:08.617318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.405 [2024-12-10 00:17:08.617350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.405 [2024-12-10 00:17:08.617960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.405 [2024-12-10 00:17:08.618146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.405 [2024-12-10 00:17:08.618157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.405 [2024-12-10 00:17:08.618166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.405 [2024-12-10 00:17:08.618175] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.405 [2024-12-10 00:17:08.629555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.405 [2024-12-10 00:17:08.629967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.405 [2024-12-10 00:17:08.629987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.405 [2024-12-10 00:17:08.629996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.405 [2024-12-10 00:17:08.630153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.405 [2024-12-10 00:17:08.630311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.405 [2024-12-10 00:17:08.630325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.405 [2024-12-10 00:17:08.630333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.405 [2024-12-10 00:17:08.630341] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.405 [2024-12-10 00:17:08.642492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.405 [2024-12-10 00:17:08.642835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.405 [2024-12-10 00:17:08.642855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.405 [2024-12-10 00:17:08.642864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.405 [2024-12-10 00:17:08.643030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.405 [2024-12-10 00:17:08.643196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.405 [2024-12-10 00:17:08.643207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.405 [2024-12-10 00:17:08.643216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.405 [2024-12-10 00:17:08.643224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.405 [2024-12-10 00:17:08.655219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.405 [2024-12-10 00:17:08.655639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.405 [2024-12-10 00:17:08.655693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.405 [2024-12-10 00:17:08.655724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.405 [2024-12-10 00:17:08.656153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.405 [2024-12-10 00:17:08.656321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.405 [2024-12-10 00:17:08.656333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.405 [2024-12-10 00:17:08.656341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.405 [2024-12-10 00:17:08.656349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.405 10057.67 IOPS, 39.29 MiB/s [2024-12-09T23:17:08.878Z] [2024-12-10 00:17:08.669106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.405 [2024-12-10 00:17:08.669450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.405 [2024-12-10 00:17:08.669468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.405 [2024-12-10 00:17:08.669477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.405 [2024-12-10 00:17:08.669634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.405 [2024-12-10 00:17:08.669793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.405 [2024-12-10 00:17:08.669803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.405 [2024-12-10 00:17:08.669813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.405 [2024-12-10 00:17:08.669831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.405 [2024-12-10 00:17:08.681778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.405 [2024-12-10 00:17:08.682189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.405 [2024-12-10 00:17:08.682244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.405 [2024-12-10 00:17:08.682277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.405 [2024-12-10 00:17:08.682670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.405 [2024-12-10 00:17:08.682835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.405 [2024-12-10 00:17:08.682847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.405 [2024-12-10 00:17:08.682855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.405 [2024-12-10 00:17:08.682863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
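Aside on the "10057.67 IOPS, 39.29 MiB/s" marker interleaved above: this is the I/O generator's periodic throughput report, not an error. The two figures are consistent with a 4 KiB I/O size, which is an inference from this excerpt rather than something the log states: 10057.67 IOPS × 4096 B ≈ 41.2 MB/s ≈ 39.29 MiB/s.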
00:35:24.406 [2024-12-10 00:17:08.694537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.406 [2024-12-10 00:17:08.694939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.406 [2024-12-10 00:17:08.694960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.406 [2024-12-10 00:17:08.694969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.406 [2024-12-10 00:17:08.695129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.406 [2024-12-10 00:17:08.695287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.406 [2024-12-10 00:17:08.695297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.406 [2024-12-10 00:17:08.695306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.406 [2024-12-10 00:17:08.695314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.406 [2024-12-10 00:17:08.707426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.406 [2024-12-10 00:17:08.707736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.406 [2024-12-10 00:17:08.707754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.406 [2024-12-10 00:17:08.707763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.406 [2024-12-10 00:17:08.707925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.406 [2024-12-10 00:17:08.708083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.406 [2024-12-10 00:17:08.708094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.406 [2024-12-10 00:17:08.708102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.406 [2024-12-10 00:17:08.708110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.406 [2024-12-10 00:17:08.720226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.406 [2024-12-10 00:17:08.720623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.406 [2024-12-10 00:17:08.720641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.406 [2024-12-10 00:17:08.720650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.406 [2024-12-10 00:17:08.720807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.406 [2024-12-10 00:17:08.720992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.406 [2024-12-10 00:17:08.721004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.406 [2024-12-10 00:17:08.721013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.406 [2024-12-10 00:17:08.721021] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.406 [2024-12-10 00:17:08.732899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.406 [2024-12-10 00:17:08.733309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.406 [2024-12-10 00:17:08.733359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.406 [2024-12-10 00:17:08.733391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.406 [2024-12-10 00:17:08.733999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.406 [2024-12-10 00:17:08.734471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.406 [2024-12-10 00:17:08.734481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.406 [2024-12-10 00:17:08.734490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.406 [2024-12-10 00:17:08.734497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.406 [2024-12-10 00:17:08.745572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.406 [2024-12-10 00:17:08.745962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.406 [2024-12-10 00:17:08.745982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.406 [2024-12-10 00:17:08.745991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.406 [2024-12-10 00:17:08.746157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.406 [2024-12-10 00:17:08.746323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.406 [2024-12-10 00:17:08.746335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.406 [2024-12-10 00:17:08.746344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.406 [2024-12-10 00:17:08.746352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.406 [2024-12-10 00:17:08.758315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.406 [2024-12-10 00:17:08.758703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.406 [2024-12-10 00:17:08.758721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.406 [2024-12-10 00:17:08.758733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.406 [2024-12-10 00:17:08.758916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.406 [2024-12-10 00:17:08.759084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.406 [2024-12-10 00:17:08.759095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.406 [2024-12-10 00:17:08.759104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.406 [2024-12-10 00:17:08.759112] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.406 [2024-12-10 00:17:08.771104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.406 [2024-12-10 00:17:08.771447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.406 [2024-12-10 00:17:08.771465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.406 [2024-12-10 00:17:08.771474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.406 [2024-12-10 00:17:08.771632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.406 [2024-12-10 00:17:08.771790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.406 [2024-12-10 00:17:08.771801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.406 [2024-12-10 00:17:08.771809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.406 [2024-12-10 00:17:08.771817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.406 [2024-12-10 00:17:08.783885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.406 [2024-12-10 00:17:08.784310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.407 [2024-12-10 00:17:08.784328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.407 [2024-12-10 00:17:08.784338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.407 [2024-12-10 00:17:08.784504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.407 [2024-12-10 00:17:08.784670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.407 [2024-12-10 00:17:08.784681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.407 [2024-12-10 00:17:08.784690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.407 [2024-12-10 00:17:08.784698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.407 [2024-12-10 00:17:08.796598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.407 [2024-12-10 00:17:08.797014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.407 [2024-12-10 00:17:08.797032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.407 [2024-12-10 00:17:08.797042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.407 [2024-12-10 00:17:08.797198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.407 [2024-12-10 00:17:08.797359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.407 [2024-12-10 00:17:08.797370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.407 [2024-12-10 00:17:08.797378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.407 [2024-12-10 00:17:08.797386] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.407 [2024-12-10 00:17:08.809334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.407 [2024-12-10 00:17:08.809687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.407 [2024-12-10 00:17:08.809706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.407 [2024-12-10 00:17:08.809716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.407 [2024-12-10 00:17:08.809887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.407 [2024-12-10 00:17:08.810054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.407 [2024-12-10 00:17:08.810066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.407 [2024-12-10 00:17:08.810074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.407 [2024-12-10 00:17:08.810082] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.407 [2024-12-10 00:17:08.822094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.407 [2024-12-10 00:17:08.822505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.407 [2024-12-10 00:17:08.822547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.407 [2024-12-10 00:17:08.822580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.407 [2024-12-10 00:17:08.823189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.407 [2024-12-10 00:17:08.823784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.407 [2024-12-10 00:17:08.823810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.407 [2024-12-10 00:17:08.823819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.407 [2024-12-10 00:17:08.823832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.407 [2024-12-10 00:17:08.834846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.407 [2024-12-10 00:17:08.835261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.407 [2024-12-10 00:17:08.835280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.407 [2024-12-10 00:17:08.835289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.407 [2024-12-10 00:17:08.835446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.407 [2024-12-10 00:17:08.835604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.407 [2024-12-10 00:17:08.835615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.407 [2024-12-10 00:17:08.835623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.407 [2024-12-10 00:17:08.835633] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.407 [2024-12-10 00:17:08.847589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.407 [2024-12-10 00:17:08.847928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.407 [2024-12-10 00:17:08.847947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.407 [2024-12-10 00:17:08.847956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.407 [2024-12-10 00:17:08.848113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.407 [2024-12-10 00:17:08.848271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.407 [2024-12-10 00:17:08.848281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.407 [2024-12-10 00:17:08.848290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.407 [2024-12-10 00:17:08.848298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.407 [2024-12-10 00:17:08.860258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.407 [2024-12-10 00:17:08.860664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.407 [2024-12-10 00:17:08.860682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.407 [2024-12-10 00:17:08.860691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.407 [2024-12-10 00:17:08.860854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.407 [2024-12-10 00:17:08.861036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.407 [2024-12-10 00:17:08.861047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.407 [2024-12-10 00:17:08.861056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.407 [2024-12-10 00:17:08.861064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.408 [2024-12-10 00:17:08.873009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.408 [2024-12-10 00:17:08.873408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.408 [2024-12-10 00:17:08.873427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.408 [2024-12-10 00:17:08.873436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.408 [2024-12-10 00:17:08.873602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.408 [2024-12-10 00:17:08.873769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.408 [2024-12-10 00:17:08.873780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.408 [2024-12-10 00:17:08.873789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.408 [2024-12-10 00:17:08.873797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.668 [2024-12-10 00:17:08.885860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.668 [2024-12-10 00:17:08.886281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.668 [2024-12-10 00:17:08.886298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.668 [2024-12-10 00:17:08.886307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.668 [2024-12-10 00:17:08.886465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.668 [2024-12-10 00:17:08.886623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.668 [2024-12-10 00:17:08.886633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.668 [2024-12-10 00:17:08.886642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.668 [2024-12-10 00:17:08.886650] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.668 [2024-12-10 00:17:08.898530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.668 [2024-12-10 00:17:08.898889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.668 [2024-12-10 00:17:08.898908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.668 [2024-12-10 00:17:08.898917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.668 [2024-12-10 00:17:08.899074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.668 [2024-12-10 00:17:08.899232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.668 [2024-12-10 00:17:08.899243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.668 [2024-12-10 00:17:08.899251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.668 [2024-12-10 00:17:08.899259] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.668 [2024-12-10 00:17:08.911278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.668 [2024-12-10 00:17:08.911705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.668 [2024-12-10 00:17:08.911757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.668 [2024-12-10 00:17:08.911789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.668 [2024-12-10 00:17:08.912182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.668 [2024-12-10 00:17:08.912350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.668 [2024-12-10 00:17:08.912362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.668 [2024-12-10 00:17:08.912371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.668 [2024-12-10 00:17:08.912379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.668 [2024-12-10 00:17:08.923953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.668 [2024-12-10 00:17:08.924294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.668 [2024-12-10 00:17:08.924312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.668 [2024-12-10 00:17:08.924324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.668 [2024-12-10 00:17:08.924481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.668 [2024-12-10 00:17:08.924639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.668 [2024-12-10 00:17:08.924650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.668 [2024-12-10 00:17:08.924658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.668 [2024-12-10 00:17:08.924666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.668 [2024-12-10 00:17:08.936617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.668 [2024-12-10 00:17:08.937003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.668 [2024-12-10 00:17:08.937021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.668 [2024-12-10 00:17:08.937030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.668 [2024-12-10 00:17:08.937187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.668 [2024-12-10 00:17:08.937345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.668 [2024-12-10 00:17:08.937356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.668 [2024-12-10 00:17:08.937364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.668 [2024-12-10 00:17:08.937372] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.668 [2024-12-10 00:17:08.949337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.668 [2024-12-10 00:17:08.949727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.668 [2024-12-10 00:17:08.949745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.668 [2024-12-10 00:17:08.949755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.668 [2024-12-10 00:17:08.949936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.668 [2024-12-10 00:17:08.950103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.668 [2024-12-10 00:17:08.950115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.668 [2024-12-10 00:17:08.950123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.668 [2024-12-10 00:17:08.950131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.668 [2024-12-10 00:17:08.962061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.668 [2024-12-10 00:17:08.962489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.668 [2024-12-10 00:17:08.962541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.668 [2024-12-10 00:17:08.962573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.668 [2024-12-10 00:17:08.963090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.668 [2024-12-10 00:17:08.963261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.668 [2024-12-10 00:17:08.963272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.668 [2024-12-10 00:17:08.963281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.668 [2024-12-10 00:17:08.963289] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.668 [2024-12-10 00:17:08.974854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.668 [2024-12-10 00:17:08.975267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.668 [2024-12-10 00:17:08.975285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.668 [2024-12-10 00:17:08.975294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.669 [2024-12-10 00:17:08.975452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.669 [2024-12-10 00:17:08.975610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.669 [2024-12-10 00:17:08.975621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.669 [2024-12-10 00:17:08.975629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.669 [2024-12-10 00:17:08.975637] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.669 [2024-12-10 00:17:08.987598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.669 [2024-12-10 00:17:08.987960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.669 [2024-12-10 00:17:08.988016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.669 [2024-12-10 00:17:08.988048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.669 [2024-12-10 00:17:08.988639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.669 [2024-12-10 00:17:08.989146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.669 [2024-12-10 00:17:08.989157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.669 [2024-12-10 00:17:08.989166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.669 [2024-12-10 00:17:08.989174] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.669 [2024-12-10 00:17:09.000377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.669 [2024-12-10 00:17:09.000788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.669 [2024-12-10 00:17:09.000806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.669 [2024-12-10 00:17:09.000815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.669 [2024-12-10 00:17:09.001001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.669 [2024-12-10 00:17:09.001168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.669 [2024-12-10 00:17:09.001179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.669 [2024-12-10 00:17:09.001188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.669 [2024-12-10 00:17:09.001199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.669 [2024-12-10 00:17:09.013128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.669 [2024-12-10 00:17:09.013540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.669 [2024-12-10 00:17:09.013558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.669 [2024-12-10 00:17:09.013567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.669 [2024-12-10 00:17:09.013724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.669 [2024-12-10 00:17:09.013903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.669 [2024-12-10 00:17:09.013916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.669 [2024-12-10 00:17:09.013925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.669 [2024-12-10 00:17:09.013933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.669 [2024-12-10 00:17:09.025870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.669 [2024-12-10 00:17:09.026275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.669 [2024-12-10 00:17:09.026316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.669 [2024-12-10 00:17:09.026348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.669 [2024-12-10 00:17:09.026955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.669 [2024-12-10 00:17:09.027504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.669 [2024-12-10 00:17:09.027515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.669 [2024-12-10 00:17:09.027524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.669 [2024-12-10 00:17:09.027533] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.669 [2024-12-10 00:17:09.038695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.669 [2024-12-10 00:17:09.039129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.669 [2024-12-10 00:17:09.039147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.669 [2024-12-10 00:17:09.039157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.669 [2024-12-10 00:17:09.039322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.669 [2024-12-10 00:17:09.039489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.669 [2024-12-10 00:17:09.039501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.669 [2024-12-10 00:17:09.039509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.669 [2024-12-10 00:17:09.039517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.669 [2024-12-10 00:17:09.051523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.669 [2024-12-10 00:17:09.051951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.669 [2024-12-10 00:17:09.052001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.669 [2024-12-10 00:17:09.052033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.669 [2024-12-10 00:17:09.052624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.669 [2024-12-10 00:17:09.053231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.669 [2024-12-10 00:17:09.053273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.669 [2024-12-10 00:17:09.053282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.669 [2024-12-10 00:17:09.053291] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.669 [2024-12-10 00:17:09.064302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.669 [2024-12-10 00:17:09.064655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.669 [2024-12-10 00:17:09.064672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.669 [2024-12-10 00:17:09.064681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.669 [2024-12-10 00:17:09.064844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.669 [2024-12-10 00:17:09.065024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.669 [2024-12-10 00:17:09.065035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.669 [2024-12-10 00:17:09.065044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.669 [2024-12-10 00:17:09.065052] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.669 [2024-12-10 00:17:09.076991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.669 [2024-12-10 00:17:09.077401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.669 [2024-12-10 00:17:09.077419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.669 [2024-12-10 00:17:09.077428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.669 [2024-12-10 00:17:09.077587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.669 [2024-12-10 00:17:09.077744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.669 [2024-12-10 00:17:09.077755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.669 [2024-12-10 00:17:09.077763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.669 [2024-12-10 00:17:09.077771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.669 [2024-12-10 00:17:09.089732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.669 [2024-12-10 00:17:09.090165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.669 [2024-12-10 00:17:09.090217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.669 [2024-12-10 00:17:09.090257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.669 [2024-12-10 00:17:09.090864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.669 [2024-12-10 00:17:09.091437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.669 [2024-12-10 00:17:09.091461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.669 [2024-12-10 00:17:09.091482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.669 [2024-12-10 00:17:09.091500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.669 [2024-12-10 00:17:09.104863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.669 [2024-12-10 00:17:09.105362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.669 [2024-12-10 00:17:09.105388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.669 [2024-12-10 00:17:09.105403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.669 [2024-12-10 00:17:09.105660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.670 [2024-12-10 00:17:09.105929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.670 [2024-12-10 00:17:09.105946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.670 [2024-12-10 00:17:09.105960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.670 [2024-12-10 00:17:09.105973] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.670 [2024-12-10 00:17:09.117856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.670 [2024-12-10 00:17:09.118292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.670 [2024-12-10 00:17:09.118311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.670 [2024-12-10 00:17:09.118322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.670 [2024-12-10 00:17:09.118497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.670 [2024-12-10 00:17:09.118674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.670 [2024-12-10 00:17:09.118686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.670 [2024-12-10 00:17:09.118695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.670 [2024-12-10 00:17:09.118704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.670 [2024-12-10 00:17:09.130628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.670 [2024-12-10 00:17:09.130979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.670 [2024-12-10 00:17:09.131033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.670 [2024-12-10 00:17:09.131065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.670 [2024-12-10 00:17:09.131579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.670 [2024-12-10 00:17:09.131901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.670 [2024-12-10 00:17:09.131932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.670 [2024-12-10 00:17:09.131953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.670 [2024-12-10 00:17:09.131971] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.930 [2024-12-10 00:17:09.145793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.930 [2024-12-10 00:17:09.146250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.930 [2024-12-10 00:17:09.146278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.930 [2024-12-10 00:17:09.146294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.930 [2024-12-10 00:17:09.146553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.930 [2024-12-10 00:17:09.146814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.930 [2024-12-10 00:17:09.146837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.930 [2024-12-10 00:17:09.146851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.930 [2024-12-10 00:17:09.146864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.930 [2024-12-10 00:17:09.158845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.930 [2024-12-10 00:17:09.159272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.930 [2024-12-10 00:17:09.159292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.930 [2024-12-10 00:17:09.159303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.930 [2024-12-10 00:17:09.159478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.930 [2024-12-10 00:17:09.159655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.930 [2024-12-10 00:17:09.159667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.930 [2024-12-10 00:17:09.159677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.930 [2024-12-10 00:17:09.159685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.930 [2024-12-10 00:17:09.171620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.930 [2024-12-10 00:17:09.172031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.930 [2024-12-10 00:17:09.172049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.930 [2024-12-10 00:17:09.172059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.930 [2024-12-10 00:17:09.172217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.930 [2024-12-10 00:17:09.172375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.930 [2024-12-10 00:17:09.172386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.930 [2024-12-10 00:17:09.172394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.930 [2024-12-10 00:17:09.172405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.930 [2024-12-10 00:17:09.184367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.930 [2024-12-10 00:17:09.184773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.930 [2024-12-10 00:17:09.184791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.930 [2024-12-10 00:17:09.184800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.930 [2024-12-10 00:17:09.184984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.930 [2024-12-10 00:17:09.185151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.930 [2024-12-10 00:17:09.185163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.930 [2024-12-10 00:17:09.185171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.930 [2024-12-10 00:17:09.185179] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.930 [2024-12-10 00:17:09.197115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.930 [2024-12-10 00:17:09.197510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.930 [2024-12-10 00:17:09.197529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.930 [2024-12-10 00:17:09.197538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.930 [2024-12-10 00:17:09.197694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.930 [2024-12-10 00:17:09.197859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.930 [2024-12-10 00:17:09.197887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.930 [2024-12-10 00:17:09.197898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.930 [2024-12-10 00:17:09.197906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.930 [2024-12-10 00:17:09.209902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.930 [2024-12-10 00:17:09.210297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.930 [2024-12-10 00:17:09.210350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.930 [2024-12-10 00:17:09.210383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.930 [2024-12-10 00:17:09.210846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.930 [2024-12-10 00:17:09.211015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.930 [2024-12-10 00:17:09.211026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.930 [2024-12-10 00:17:09.211035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.930 [2024-12-10 00:17:09.211043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.930 [2024-12-10 00:17:09.222617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.930 [2024-12-10 00:17:09.223029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.931 [2024-12-10 00:17:09.223047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.931 [2024-12-10 00:17:09.223057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.931 [2024-12-10 00:17:09.223214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.931 [2024-12-10 00:17:09.223371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.931 [2024-12-10 00:17:09.223382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.931 [2024-12-10 00:17:09.223391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.931 [2024-12-10 00:17:09.223398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.931 [2024-12-10 00:17:09.235305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.931 [2024-12-10 00:17:09.235711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.931 [2024-12-10 00:17:09.235753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.931 [2024-12-10 00:17:09.235786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.931 [2024-12-10 00:17:09.236392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.931 [2024-12-10 00:17:09.236637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.931 [2024-12-10 00:17:09.236649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.931 [2024-12-10 00:17:09.236658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.931 [2024-12-10 00:17:09.236666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.931 [2024-12-10 00:17:09.248153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.931 [2024-12-10 00:17:09.248567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.931 [2024-12-10 00:17:09.248586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.931 [2024-12-10 00:17:09.248595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.931 [2024-12-10 00:17:09.248752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.931 [2024-12-10 00:17:09.248936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.931 [2024-12-10 00:17:09.248948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.931 [2024-12-10 00:17:09.248957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.931 [2024-12-10 00:17:09.248965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.931 [2024-12-10 00:17:09.260814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.931 [2024-12-10 00:17:09.261157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.931 [2024-12-10 00:17:09.261175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.931 [2024-12-10 00:17:09.261187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.931 [2024-12-10 00:17:09.261345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.931 [2024-12-10 00:17:09.261502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.931 [2024-12-10 00:17:09.261513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.931 [2024-12-10 00:17:09.261521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.931 [2024-12-10 00:17:09.261529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.931 [2024-12-10 00:17:09.273551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.931 [2024-12-10 00:17:09.273897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.931 [2024-12-10 00:17:09.273948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.931 [2024-12-10 00:17:09.273981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.931 [2024-12-10 00:17:09.274572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.931 [2024-12-10 00:17:09.274796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.931 [2024-12-10 00:17:09.274807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.931 [2024-12-10 00:17:09.274816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.931 [2024-12-10 00:17:09.274828] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.931 [2024-12-10 00:17:09.286323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.931 [2024-12-10 00:17:09.286755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.931 [2024-12-10 00:17:09.286774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.931 [2024-12-10 00:17:09.286783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.931 [2024-12-10 00:17:09.286969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.931 [2024-12-10 00:17:09.287135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.931 [2024-12-10 00:17:09.287147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.931 [2024-12-10 00:17:09.287156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.931 [2024-12-10 00:17:09.287164] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.931 [2024-12-10 00:17:09.299238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.931 [2024-12-10 00:17:09.299657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.931 [2024-12-10 00:17:09.299675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.931 [2024-12-10 00:17:09.299685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.931 [2024-12-10 00:17:09.299857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.931 [2024-12-10 00:17:09.300034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.931 [2024-12-10 00:17:09.300048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.931 [2024-12-10 00:17:09.300056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.931 [2024-12-10 00:17:09.300064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.931 [2024-12-10 00:17:09.311996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.931 [2024-12-10 00:17:09.312352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.931 [2024-12-10 00:17:09.312371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.931 [2024-12-10 00:17:09.312380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.931 [2024-12-10 00:17:09.312537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.931 [2024-12-10 00:17:09.312695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.931 [2024-12-10 00:17:09.312705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.931 [2024-12-10 00:17:09.312714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.931 [2024-12-10 00:17:09.312721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.931 [2024-12-10 00:17:09.324662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.931 [2024-12-10 00:17:09.325083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.931 [2024-12-10 00:17:09.325101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.931 [2024-12-10 00:17:09.325110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.931 [2024-12-10 00:17:09.325267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.931 [2024-12-10 00:17:09.325425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.931 [2024-12-10 00:17:09.325436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.931 [2024-12-10 00:17:09.325444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.931 [2024-12-10 00:17:09.325452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.931 [2024-12-10 00:17:09.337411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.931 [2024-12-10 00:17:09.337840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.931 [2024-12-10 00:17:09.337894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.931 [2024-12-10 00:17:09.337926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.931 [2024-12-10 00:17:09.338524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.931 [2024-12-10 00:17:09.339057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.931 [2024-12-10 00:17:09.339069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.931 [2024-12-10 00:17:09.339079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.931 [2024-12-10 00:17:09.339091] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.931 [2024-12-10 00:17:09.350204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.931 [2024-12-10 00:17:09.350630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.932 [2024-12-10 00:17:09.350683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.932 [2024-12-10 00:17:09.350715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.932 [2024-12-10 00:17:09.351218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.932 [2024-12-10 00:17:09.351387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.932 [2024-12-10 00:17:09.351399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.932 [2024-12-10 00:17:09.351408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.932 [2024-12-10 00:17:09.351417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.932 [2024-12-10 00:17:09.363213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.932 [2024-12-10 00:17:09.363624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.932 [2024-12-10 00:17:09.363677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.932 [2024-12-10 00:17:09.363709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.932 [2024-12-10 00:17:09.364226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.932 [2024-12-10 00:17:09.364394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.932 [2024-12-10 00:17:09.364405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.932 [2024-12-10 00:17:09.364414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.932 [2024-12-10 00:17:09.364422] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.932 [2024-12-10 00:17:09.375895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.932 [2024-12-10 00:17:09.376280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.932 [2024-12-10 00:17:09.376299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.932 [2024-12-10 00:17:09.376308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.932 [2024-12-10 00:17:09.376466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.932 [2024-12-10 00:17:09.376624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.932 [2024-12-10 00:17:09.376634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.932 [2024-12-10 00:17:09.376643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.932 [2024-12-10 00:17:09.376650] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:24.932 [2024-12-10 00:17:09.388693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.932 [2024-12-10 00:17:09.389133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.932 [2024-12-10 00:17:09.389174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.932 [2024-12-10 00:17:09.389207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.932 [2024-12-10 00:17:09.389798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:24.932 [2024-12-10 00:17:09.390022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:24.932 [2024-12-10 00:17:09.390032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:24.932 [2024-12-10 00:17:09.390040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:24.932 [2024-12-10 00:17:09.390048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:24.932 [2024-12-10 00:17:09.401584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:24.932 [2024-12-10 00:17:09.401982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.932 [2024-12-10 00:17:09.402002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:24.932 [2024-12-10 00:17:09.402012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:24.932 [2024-12-10 00:17:09.402177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.196 [2024-12-10 00:17:09.402344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.196 [2024-12-10 00:17:09.402357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.196 [2024-12-10 00:17:09.402366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.196 [2024-12-10 00:17:09.402376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.196 [2024-12-10 00:17:09.414383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.196 [2024-12-10 00:17:09.414817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.196 [2024-12-10 00:17:09.414884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.196 [2024-12-10 00:17:09.414917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.196 [2024-12-10 00:17:09.415363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.196 [2024-12-10 00:17:09.415533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.196 [2024-12-10 00:17:09.415544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.196 [2024-12-10 00:17:09.415553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.196 [2024-12-10 00:17:09.415562] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.196 [2024-12-10 00:17:09.427153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.196 [2024-12-10 00:17:09.427532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.196 [2024-12-10 00:17:09.427585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.196 [2024-12-10 00:17:09.427625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.196 [2024-12-10 00:17:09.428100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.196 [2024-12-10 00:17:09.428261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.196 [2024-12-10 00:17:09.428272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.196 [2024-12-10 00:17:09.428281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.196 [2024-12-10 00:17:09.428288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.196 [2024-12-10 00:17:09.439911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.196 [2024-12-10 00:17:09.440313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.196 [2024-12-10 00:17:09.440366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.196 [2024-12-10 00:17:09.440398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.196 [2024-12-10 00:17:09.440927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.196 [2024-12-10 00:17:09.441087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.197 [2024-12-10 00:17:09.441098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.197 [2024-12-10 00:17:09.441106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.197 [2024-12-10 00:17:09.441114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.197 [2024-12-10 00:17:09.452745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.197 [2024-12-10 00:17:09.453129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.197 [2024-12-10 00:17:09.453148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.197 [2024-12-10 00:17:09.453157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.197 [2024-12-10 00:17:09.453314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.197 [2024-12-10 00:17:09.453472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.197 [2024-12-10 00:17:09.453483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.197 [2024-12-10 00:17:09.453491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.197 [2024-12-10 00:17:09.453499] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.197 [2024-12-10 00:17:09.465546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.197 [2024-12-10 00:17:09.465901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.197 [2024-12-10 00:17:09.465920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.197 [2024-12-10 00:17:09.465930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.197 [2024-12-10 00:17:09.466096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.197 [2024-12-10 00:17:09.466262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.197 [2024-12-10 00:17:09.466277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.197 [2024-12-10 00:17:09.466287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.197 [2024-12-10 00:17:09.466295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.197 [2024-12-10 00:17:09.478343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.197 [2024-12-10 00:17:09.478741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.197 [2024-12-10 00:17:09.478760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.197 [2024-12-10 00:17:09.478770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.197 [2024-12-10 00:17:09.478941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.197 [2024-12-10 00:17:09.479114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.197 [2024-12-10 00:17:09.479125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.197 [2024-12-10 00:17:09.479133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.197 [2024-12-10 00:17:09.479141] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.197 [2024-12-10 00:17:09.491369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.197 [2024-12-10 00:17:09.491778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.197 [2024-12-10 00:17:09.491797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.197 [2024-12-10 00:17:09.491808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.197 [2024-12-10 00:17:09.491984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.197 [2024-12-10 00:17:09.492156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.197 [2024-12-10 00:17:09.492168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.197 [2024-12-10 00:17:09.492177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.197 [2024-12-10 00:17:09.492185] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.197 [2024-12-10 00:17:09.504191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.197 [2024-12-10 00:17:09.504556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.197 [2024-12-10 00:17:09.504575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.197 [2024-12-10 00:17:09.504584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.197 [2024-12-10 00:17:09.504740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.197 [2024-12-10 00:17:09.504907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.197 [2024-12-10 00:17:09.504918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.197 [2024-12-10 00:17:09.504927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.197 [2024-12-10 00:17:09.504941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.197 [2024-12-10 00:17:09.516951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.197 [2024-12-10 00:17:09.517220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.197 [2024-12-10 00:17:09.517239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.197 [2024-12-10 00:17:09.517248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.197 [2024-12-10 00:17:09.517405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.197 [2024-12-10 00:17:09.517564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.197 [2024-12-10 00:17:09.517575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.197 [2024-12-10 00:17:09.517583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.197 [2024-12-10 00:17:09.517591] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.197 [2024-12-10 00:17:09.529669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.197 [2024-12-10 00:17:09.530009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.197 [2024-12-10 00:17:09.530029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.197 [2024-12-10 00:17:09.530038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.197 [2024-12-10 00:17:09.530204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.197 [2024-12-10 00:17:09.530370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.197 [2024-12-10 00:17:09.530382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.197 [2024-12-10 00:17:09.530391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.197 [2024-12-10 00:17:09.530399] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.197 [2024-12-10 00:17:09.542477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.197 [2024-12-10 00:17:09.542814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.197 [2024-12-10 00:17:09.542840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.197 [2024-12-10 00:17:09.542850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.197 [2024-12-10 00:17:09.543017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.197 [2024-12-10 00:17:09.543184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.197 [2024-12-10 00:17:09.543195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.197 [2024-12-10 00:17:09.543204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.197 [2024-12-10 00:17:09.543213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.197 [2024-12-10 00:17:09.555390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.197 [2024-12-10 00:17:09.555676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.197 [2024-12-10 00:17:09.555694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.197 [2024-12-10 00:17:09.555704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.197 [2024-12-10 00:17:09.555881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.197 [2024-12-10 00:17:09.556053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.197 [2024-12-10 00:17:09.556064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.197 [2024-12-10 00:17:09.556074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.197 [2024-12-10 00:17:09.556082] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.197 [2024-12-10 00:17:09.568447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.197 [2024-12-10 00:17:09.568853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.197 [2024-12-10 00:17:09.568874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.197 [2024-12-10 00:17:09.568884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.197 [2024-12-10 00:17:09.569055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.197 [2024-12-10 00:17:09.569227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.197 [2024-12-10 00:17:09.569239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.198 [2024-12-10 00:17:09.569248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.198 [2024-12-10 00:17:09.569256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.198 [2024-12-10 00:17:09.581451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.198 [2024-12-10 00:17:09.581881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.198 [2024-12-10 00:17:09.581901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.198 [2024-12-10 00:17:09.581911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.198 [2024-12-10 00:17:09.582082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.198 [2024-12-10 00:17:09.582254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.198 [2024-12-10 00:17:09.582265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.198 [2024-12-10 00:17:09.582274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.198 [2024-12-10 00:17:09.582282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.198 [2024-12-10 00:17:09.594478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.198 [2024-12-10 00:17:09.594913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.198 [2024-12-10 00:17:09.594932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.198 [2024-12-10 00:17:09.594943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.198 [2024-12-10 00:17:09.595117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.198 [2024-12-10 00:17:09.595289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.198 [2024-12-10 00:17:09.595301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.198 [2024-12-10 00:17:09.595310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.198 [2024-12-10 00:17:09.595318] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.198 [2024-12-10 00:17:09.607506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.198 [2024-12-10 00:17:09.607978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.198 [2024-12-10 00:17:09.608033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.198 [2024-12-10 00:17:09.608065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.198 [2024-12-10 00:17:09.608586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.198 [2024-12-10 00:17:09.608759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.198 [2024-12-10 00:17:09.608770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.198 [2024-12-10 00:17:09.608780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.198 [2024-12-10 00:17:09.608788] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.198 [2024-12-10 00:17:09.620520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.198 [2024-12-10 00:17:09.620876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.198 [2024-12-10 00:17:09.620897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.198 [2024-12-10 00:17:09.620906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.198 [2024-12-10 00:17:09.621077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.198 [2024-12-10 00:17:09.621248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.198 [2024-12-10 00:17:09.621260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.198 [2024-12-10 00:17:09.621269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.198 [2024-12-10 00:17:09.621277] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.198 [2024-12-10 00:17:09.633450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.198 [2024-12-10 00:17:09.633890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.198 [2024-12-10 00:17:09.633910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.198 [2024-12-10 00:17:09.633919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.198 [2024-12-10 00:17:09.634086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.198 [2024-12-10 00:17:09.634252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.198 [2024-12-10 00:17:09.634267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.198 [2024-12-10 00:17:09.634276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.198 [2024-12-10 00:17:09.634285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.198 [2024-12-10 00:17:09.646203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.198 [2024-12-10 00:17:09.646618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.198 [2024-12-10 00:17:09.646637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.198 [2024-12-10 00:17:09.646646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.198 [2024-12-10 00:17:09.646803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.198 [2024-12-10 00:17:09.646988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.198 [2024-12-10 00:17:09.647001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.198 [2024-12-10 00:17:09.647010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.198 [2024-12-10 00:17:09.647018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.198 [2024-12-10 00:17:09.658934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.198 [2024-12-10 00:17:09.659283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.198 [2024-12-10 00:17:09.659302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.198 [2024-12-10 00:17:09.659312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.198 [2024-12-10 00:17:09.659478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.198 [2024-12-10 00:17:09.659646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.198 [2024-12-10 00:17:09.659657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.198 [2024-12-10 00:17:09.659666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.198 [2024-12-10 00:17:09.659674] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.461 7543.25 IOPS, 29.47 MiB/s [2024-12-09T23:17:09.934Z] [2024-12-10 00:17:09.672856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.461 [2024-12-10 00:17:09.673139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.461 [2024-12-10 00:17:09.673158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.461 [2024-12-10 00:17:09.673169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.461 [2024-12-10 00:17:09.673335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.461 [2024-12-10 00:17:09.673502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.461 [2024-12-10 00:17:09.673514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.461 [2024-12-10 00:17:09.673526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.461 [2024-12-10 00:17:09.673536] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.461 [2024-12-10 00:17:09.685814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.461 [2024-12-10 00:17:09.686219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.461 [2024-12-10 00:17:09.686239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.461 [2024-12-10 00:17:09.686249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.461 [2024-12-10 00:17:09.686415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.461 [2024-12-10 00:17:09.686581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.461 [2024-12-10 00:17:09.686592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.461 [2024-12-10 00:17:09.686601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.461 [2024-12-10 00:17:09.686609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.461 [2024-12-10 00:17:09.698684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.461 [2024-12-10 00:17:09.699052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.461 [2024-12-10 00:17:09.699071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.461 [2024-12-10 00:17:09.699080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.461 [2024-12-10 00:17:09.699238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.461 [2024-12-10 00:17:09.699397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.461 [2024-12-10 00:17:09.699408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.461 [2024-12-10 00:17:09.699416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.461 [2024-12-10 00:17:09.699424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.461 [2024-12-10 00:17:09.711592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.461 [2024-12-10 00:17:09.711953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.461 [2024-12-10 00:17:09.711972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.461 [2024-12-10 00:17:09.711981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.461 [2024-12-10 00:17:09.712138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.462 [2024-12-10 00:17:09.712296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.462 [2024-12-10 00:17:09.712307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.462 [2024-12-10 00:17:09.712315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.462 [2024-12-10 00:17:09.712323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.462 [2024-12-10 00:17:09.724389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.462 [2024-12-10 00:17:09.724785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.462 [2024-12-10 00:17:09.724803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.462 [2024-12-10 00:17:09.724812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.462 [2024-12-10 00:17:09.724975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.462 [2024-12-10 00:17:09.725134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.462 [2024-12-10 00:17:09.725145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.462 [2024-12-10 00:17:09.725154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.462 [2024-12-10 00:17:09.725161] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.462 [2024-12-10 00:17:09.737219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.462 [2024-12-10 00:17:09.737641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.462 [2024-12-10 00:17:09.737660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.462 [2024-12-10 00:17:09.737669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.462 [2024-12-10 00:17:09.737833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.462 [2024-12-10 00:17:09.738016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.462 [2024-12-10 00:17:09.738028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.462 [2024-12-10 00:17:09.738037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.462 [2024-12-10 00:17:09.738045] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.462 [2024-12-10 00:17:09.750000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.462 [2024-12-10 00:17:09.750343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.462 [2024-12-10 00:17:09.750362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.462 [2024-12-10 00:17:09.750371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.462 [2024-12-10 00:17:09.750529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.462 [2024-12-10 00:17:09.750688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.462 [2024-12-10 00:17:09.750699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.462 [2024-12-10 00:17:09.750707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.462 [2024-12-10 00:17:09.750715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.462 [2024-12-10 00:17:09.762770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.462 [2024-12-10 00:17:09.763146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.462 [2024-12-10 00:17:09.763166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.462 [2024-12-10 00:17:09.763179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.462 [2024-12-10 00:17:09.763345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.462 [2024-12-10 00:17:09.763512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.462 [2024-12-10 00:17:09.763523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.462 [2024-12-10 00:17:09.763532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.462 [2024-12-10 00:17:09.763540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.462 [2024-12-10 00:17:09.775601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.462 [2024-12-10 00:17:09.776053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.462 [2024-12-10 00:17:09.776107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.462 [2024-12-10 00:17:09.776139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.462 [2024-12-10 00:17:09.776729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.462 [2024-12-10 00:17:09.776961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.462 [2024-12-10 00:17:09.776974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.462 [2024-12-10 00:17:09.776983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.462 [2024-12-10 00:17:09.776991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.462 [2024-12-10 00:17:09.788404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.462 [2024-12-10 00:17:09.788766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.462 [2024-12-10 00:17:09.788820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.462 [2024-12-10 00:17:09.788877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.462 [2024-12-10 00:17:09.789485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.462 [2024-12-10 00:17:09.790039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.462 [2024-12-10 00:17:09.790050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.462 [2024-12-10 00:17:09.790059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.462 [2024-12-10 00:17:09.790067] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.462 [2024-12-10 00:17:09.801307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.462 [2024-12-10 00:17:09.801710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.462 [2024-12-10 00:17:09.801763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.462 [2024-12-10 00:17:09.801796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.462 [2024-12-10 00:17:09.802240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.462 [2024-12-10 00:17:09.802404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.462 [2024-12-10 00:17:09.802416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.462 [2024-12-10 00:17:09.802424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.462 [2024-12-10 00:17:09.802432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.462 [2024-12-10 00:17:09.814292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.462 [2024-12-10 00:17:09.814694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.462 [2024-12-10 00:17:09.814713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.462 [2024-12-10 00:17:09.814723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.462 [2024-12-10 00:17:09.814898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.462 [2024-12-10 00:17:09.815070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.462 [2024-12-10 00:17:09.815081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.462 [2024-12-10 00:17:09.815091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.462 [2024-12-10 00:17:09.815099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.462 [2024-12-10 00:17:09.827295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.462 [2024-12-10 00:17:09.827715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.462 [2024-12-10 00:17:09.827734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.462 [2024-12-10 00:17:09.827744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.462 [2024-12-10 00:17:09.827919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.462 [2024-12-10 00:17:09.828091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.462 [2024-12-10 00:17:09.828103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.462 [2024-12-10 00:17:09.828112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.462 [2024-12-10 00:17:09.828120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.462 [2024-12-10 00:17:09.840116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.462 [2024-12-10 00:17:09.840494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.462 [2024-12-10 00:17:09.840513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.463 [2024-12-10 00:17:09.840523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.463 [2024-12-10 00:17:09.840689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.463 [2024-12-10 00:17:09.840860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.463 [2024-12-10 00:17:09.840872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.463 [2024-12-10 00:17:09.840881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.463 [2024-12-10 00:17:09.840894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.463 [2024-12-10 00:17:09.852997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.463 [2024-12-10 00:17:09.853385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.463 [2024-12-10 00:17:09.853404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.463 [2024-12-10 00:17:09.853413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.463 [2024-12-10 00:17:09.853571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.463 [2024-12-10 00:17:09.853729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.463 [2024-12-10 00:17:09.853740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.463 [2024-12-10 00:17:09.853748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.463 [2024-12-10 00:17:09.853756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.463 [2024-12-10 00:17:09.865813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.463 [2024-12-10 00:17:09.866211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.463 [2024-12-10 00:17:09.866230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.463 [2024-12-10 00:17:09.866239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.463 [2024-12-10 00:17:09.866406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.463 [2024-12-10 00:17:09.866573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.463 [2024-12-10 00:17:09.866584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.463 [2024-12-10 00:17:09.866593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.463 [2024-12-10 00:17:09.866601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.463 [2024-12-10 00:17:09.878677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.463 [2024-12-10 00:17:09.878978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.463 [2024-12-10 00:17:09.878997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.463 [2024-12-10 00:17:09.879006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.463 [2024-12-10 00:17:09.879164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.463 [2024-12-10 00:17:09.879321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.463 [2024-12-10 00:17:09.879332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.463 [2024-12-10 00:17:09.879341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.463 [2024-12-10 00:17:09.879349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.463 [2024-12-10 00:17:09.891558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.463 [2024-12-10 00:17:09.891983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.463 [2024-12-10 00:17:09.892002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.463 [2024-12-10 00:17:09.892011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.463 [2024-12-10 00:17:09.892168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.463 [2024-12-10 00:17:09.892327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.463 [2024-12-10 00:17:09.892338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.463 [2024-12-10 00:17:09.892346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.463 [2024-12-10 00:17:09.892354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.463 [2024-12-10 00:17:09.904472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.463 [2024-12-10 00:17:09.904839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.463 [2024-12-10 00:17:09.904857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.463 [2024-12-10 00:17:09.904866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.463 [2024-12-10 00:17:09.905023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.463 [2024-12-10 00:17:09.905182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.463 [2024-12-10 00:17:09.905193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.463 [2024-12-10 00:17:09.905201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.463 [2024-12-10 00:17:09.905209] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.463 [2024-12-10 00:17:09.917378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.463 [2024-12-10 00:17:09.917819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.463 [2024-12-10 00:17:09.917844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.463 [2024-12-10 00:17:09.917854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.463 [2024-12-10 00:17:09.918028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.463 [2024-12-10 00:17:09.918187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.463 [2024-12-10 00:17:09.918199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.463 [2024-12-10 00:17:09.918207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.463 [2024-12-10 00:17:09.918214] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.463 [2024-12-10 00:17:09.930227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.463 [2024-12-10 00:17:09.930569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.463 [2024-12-10 00:17:09.930623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.463 [2024-12-10 00:17:09.930662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.463 [2024-12-10 00:17:09.931160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.463 [2024-12-10 00:17:09.931329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.463 [2024-12-10 00:17:09.931341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.463 [2024-12-10 00:17:09.931351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.463 [2024-12-10 00:17:09.931359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.725 [2024-12-10 00:17:09.943086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.725 [2024-12-10 00:17:09.943449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.725 [2024-12-10 00:17:09.943502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.725 [2024-12-10 00:17:09.943534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.725 [2024-12-10 00:17:09.944139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.725 [2024-12-10 00:17:09.944348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.725 [2024-12-10 00:17:09.944359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.725 [2024-12-10 00:17:09.944368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.725 [2024-12-10 00:17:09.944376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.725 [2024-12-10 00:17:09.955922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.725 [2024-12-10 00:17:09.956327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.725 [2024-12-10 00:17:09.956345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.725 [2024-12-10 00:17:09.956355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.725 [2024-12-10 00:17:09.956512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.725 [2024-12-10 00:17:09.956669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.725 [2024-12-10 00:17:09.956680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.725 [2024-12-10 00:17:09.956689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.725 [2024-12-10 00:17:09.956696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.725 [2024-12-10 00:17:09.968754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.725 [2024-12-10 00:17:09.969201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.725 [2024-12-10 00:17:09.969257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.725 [2024-12-10 00:17:09.969289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.725 [2024-12-10 00:17:09.969793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.725 [2024-12-10 00:17:09.969968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.725 [2024-12-10 00:17:09.969980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.725 [2024-12-10 00:17:09.969989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.725 [2024-12-10 00:17:09.969997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.725 [2024-12-10 00:17:09.981439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.725 [2024-12-10 00:17:09.981871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.725 [2024-12-10 00:17:09.981890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.725 [2024-12-10 00:17:09.981900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.725 [2024-12-10 00:17:09.982066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.725 [2024-12-10 00:17:09.982232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.725 [2024-12-10 00:17:09.982244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.725 [2024-12-10 00:17:09.982252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.725 [2024-12-10 00:17:09.982260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.725 [2024-12-10 00:17:09.994281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.725 [2024-12-10 00:17:09.994694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.725 [2024-12-10 00:17:09.994747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.725 [2024-12-10 00:17:09.994779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.725 [2024-12-10 00:17:09.995349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.725 [2024-12-10 00:17:09.995518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.725 [2024-12-10 00:17:09.995529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.725 [2024-12-10 00:17:09.995538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.725 [2024-12-10 00:17:09.995546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.725 [2024-12-10 00:17:10.007476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.725 [2024-12-10 00:17:10.007946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.725 [2024-12-10 00:17:10.007979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.726 [2024-12-10 00:17:10.008003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.726 [2024-12-10 00:17:10.008269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.726 [2024-12-10 00:17:10.008452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.726 [2024-12-10 00:17:10.008465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.726 [2024-12-10 00:17:10.008475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.726 [2024-12-10 00:17:10.008487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.726 [2024-12-10 00:17:10.020394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.726 [2024-12-10 00:17:10.020736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.726 [2024-12-10 00:17:10.020756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.726 [2024-12-10 00:17:10.020767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.726 [2024-12-10 00:17:10.020947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.726 [2024-12-10 00:17:10.021120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.726 [2024-12-10 00:17:10.021132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.726 [2024-12-10 00:17:10.021141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.726 [2024-12-10 00:17:10.021150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.726 [2024-12-10 00:17:10.033375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.726 [2024-12-10 00:17:10.033943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.726 [2024-12-10 00:17:10.034023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.726 [2024-12-10 00:17:10.034054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.726 [2024-12-10 00:17:10.034340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.726 [2024-12-10 00:17:10.034688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.726 [2024-12-10 00:17:10.034727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.726 [2024-12-10 00:17:10.034762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.726 [2024-12-10 00:17:10.034803] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.726 [2024-12-10 00:17:10.046353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.726 [2024-12-10 00:17:10.046720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.726 [2024-12-10 00:17:10.046740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.726 [2024-12-10 00:17:10.046750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.726 [2024-12-10 00:17:10.046922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.726 [2024-12-10 00:17:10.047089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.726 [2024-12-10 00:17:10.047101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.726 [2024-12-10 00:17:10.047110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.726 [2024-12-10 00:17:10.047119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.726 [2024-12-10 00:17:10.059318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.726 [2024-12-10 00:17:10.059742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.726 [2024-12-10 00:17:10.059761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.726 [2024-12-10 00:17:10.059770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.726 [2024-12-10 00:17:10.059941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.726 [2024-12-10 00:17:10.060109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.726 [2024-12-10 00:17:10.060121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.726 [2024-12-10 00:17:10.060129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.726 [2024-12-10 00:17:10.060138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.726 [2024-12-10 00:17:10.072281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.726 [2024-12-10 00:17:10.072679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.726 [2024-12-10 00:17:10.072698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.726 [2024-12-10 00:17:10.072707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.726 [2024-12-10 00:17:10.072903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.726 [2024-12-10 00:17:10.073076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.726 [2024-12-10 00:17:10.073088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.726 [2024-12-10 00:17:10.073097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.726 [2024-12-10 00:17:10.073105] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.726 [2024-12-10 00:17:10.085757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.726 [2024-12-10 00:17:10.086183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.726 [2024-12-10 00:17:10.086211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.726 [2024-12-10 00:17:10.086228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.726 [2024-12-10 00:17:10.086428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.726 [2024-12-10 00:17:10.086628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.726 [2024-12-10 00:17:10.086646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.726 [2024-12-10 00:17:10.086661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.726 [2024-12-10 00:17:10.086676] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.726 [2024-12-10 00:17:10.099187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.726 [2024-12-10 00:17:10.099650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.726 [2024-12-10 00:17:10.099677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.726 [2024-12-10 00:17:10.099696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.726 [2024-12-10 00:17:10.099898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.726 [2024-12-10 00:17:10.100096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.726 [2024-12-10 00:17:10.100113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.726 [2024-12-10 00:17:10.100127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.726 [2024-12-10 00:17:10.100141] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.726 [2024-12-10 00:17:10.112812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.726 [2024-12-10 00:17:10.113233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.726 [2024-12-10 00:17:10.113260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.726 [2024-12-10 00:17:10.113277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.726 [2024-12-10 00:17:10.113494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.726 [2024-12-10 00:17:10.113708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.726 [2024-12-10 00:17:10.113727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.726 [2024-12-10 00:17:10.113743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.726 [2024-12-10 00:17:10.113758] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.726 [2024-12-10 00:17:10.125650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.726 [2024-12-10 00:17:10.126116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.726 [2024-12-10 00:17:10.126137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.726 [2024-12-10 00:17:10.126147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.726 [2024-12-10 00:17:10.126306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.726 [2024-12-10 00:17:10.126465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.726 [2024-12-10 00:17:10.126477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.726 [2024-12-10 00:17:10.126485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.726 [2024-12-10 00:17:10.126493] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.726 [2024-12-10 00:17:10.138540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.726 [2024-12-10 00:17:10.138951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.726 [2024-12-10 00:17:10.138972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.726 [2024-12-10 00:17:10.138983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.727 [2024-12-10 00:17:10.139143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.727 [2024-12-10 00:17:10.139309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.727 [2024-12-10 00:17:10.139320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.727 [2024-12-10 00:17:10.139330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.727 [2024-12-10 00:17:10.139338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.727 [2024-12-10 00:17:10.151529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.727 [2024-12-10 00:17:10.151885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.727 [2024-12-10 00:17:10.151906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.727 [2024-12-10 00:17:10.151917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.727 [2024-12-10 00:17:10.152084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.727 [2024-12-10 00:17:10.152252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.727 [2024-12-10 00:17:10.152265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.727 [2024-12-10 00:17:10.152274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.727 [2024-12-10 00:17:10.152282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.727 [2024-12-10 00:17:10.164502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.727 [2024-12-10 00:17:10.164919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.727 [2024-12-10 00:17:10.164940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.727 [2024-12-10 00:17:10.164950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.727 [2024-12-10 00:17:10.165118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.727 [2024-12-10 00:17:10.165286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.727 [2024-12-10 00:17:10.165298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.727 [2024-12-10 00:17:10.165307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.727 [2024-12-10 00:17:10.165315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.727 [2024-12-10 00:17:10.177391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.727 [2024-12-10 00:17:10.177848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.727 [2024-12-10 00:17:10.177905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.727 [2024-12-10 00:17:10.177938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.727 [2024-12-10 00:17:10.178262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.727 [2024-12-10 00:17:10.178431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.727 [2024-12-10 00:17:10.178442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.727 [2024-12-10 00:17:10.178451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.727 [2024-12-10 00:17:10.178464] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.727 [2024-12-10 00:17:10.190419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.727 [2024-12-10 00:17:10.190841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.727 [2024-12-10 00:17:10.190863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.727 [2024-12-10 00:17:10.190873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.727 [2024-12-10 00:17:10.191040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.727 [2024-12-10 00:17:10.191206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.727 [2024-12-10 00:17:10.191217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.727 [2024-12-10 00:17:10.191226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.727 [2024-12-10 00:17:10.191234] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.988 [2024-12-10 00:17:10.203364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.988 [2024-12-10 00:17:10.203811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.988 [2024-12-10 00:17:10.203835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.988 [2024-12-10 00:17:10.203846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.988 [2024-12-10 00:17:10.204014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.988 [2024-12-10 00:17:10.204181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.988 [2024-12-10 00:17:10.204192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.988 [2024-12-10 00:17:10.204202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.988 [2024-12-10 00:17:10.204210] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.988 [2024-12-10 00:17:10.216336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.988 [2024-12-10 00:17:10.216782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.988 [2024-12-10 00:17:10.216804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.988 [2024-12-10 00:17:10.216816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.988 [2024-12-10 00:17:10.217013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.988 [2024-12-10 00:17:10.217191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.988 [2024-12-10 00:17:10.217204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.988 [2024-12-10 00:17:10.217213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.988 [2024-12-10 00:17:10.217222] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.988 [2024-12-10 00:17:10.229321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.988 [2024-12-10 00:17:10.229742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.988 [2024-12-10 00:17:10.229762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.988 [2024-12-10 00:17:10.229772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.988 [2024-12-10 00:17:10.229946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.988 [2024-12-10 00:17:10.230113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.989 [2024-12-10 00:17:10.230125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.989 [2024-12-10 00:17:10.230134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.989 [2024-12-10 00:17:10.230142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.989 [2024-12-10 00:17:10.242296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.989 [2024-12-10 00:17:10.242713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.989 [2024-12-10 00:17:10.242764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.989 [2024-12-10 00:17:10.242805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.989 [2024-12-10 00:17:10.243344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.989 [2024-12-10 00:17:10.243511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.989 [2024-12-10 00:17:10.243522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.989 [2024-12-10 00:17:10.243530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.989 [2024-12-10 00:17:10.243538] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.989 [2024-12-10 00:17:10.255274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.989 [2024-12-10 00:17:10.255705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.989 [2024-12-10 00:17:10.255726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.989 [2024-12-10 00:17:10.255738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.989 [2024-12-10 00:17:10.255915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.989 [2024-12-10 00:17:10.256083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.989 [2024-12-10 00:17:10.256096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.989 [2024-12-10 00:17:10.256105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.989 [2024-12-10 00:17:10.256113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.989 [2024-12-10 00:17:10.268260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.989 [2024-12-10 00:17:10.268685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.989 [2024-12-10 00:17:10.268747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.989 [2024-12-10 00:17:10.268795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.989 [2024-12-10 00:17:10.269298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.989 [2024-12-10 00:17:10.269472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.989 [2024-12-10 00:17:10.269484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.989 [2024-12-10 00:17:10.269494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.989 [2024-12-10 00:17:10.269502] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.989 [2024-12-10 00:17:10.281144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.989 [2024-12-10 00:17:10.281578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.989 [2024-12-10 00:17:10.281639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.989 [2024-12-10 00:17:10.281680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.989 [2024-12-10 00:17:10.282079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.989 [2024-12-10 00:17:10.282248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.989 [2024-12-10 00:17:10.282259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.989 [2024-12-10 00:17:10.282268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.989 [2024-12-10 00:17:10.282277] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.989 [2024-12-10 00:17:10.293985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.989 [2024-12-10 00:17:10.294331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.989 [2024-12-10 00:17:10.294351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.989 [2024-12-10 00:17:10.294362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.989 [2024-12-10 00:17:10.294530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.989 [2024-12-10 00:17:10.294690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.989 [2024-12-10 00:17:10.294701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.989 [2024-12-10 00:17:10.294710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.989 [2024-12-10 00:17:10.294718] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.989 [2024-12-10 00:17:10.306898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.989 [2024-12-10 00:17:10.307338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.989 [2024-12-10 00:17:10.307401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.989 [2024-12-10 00:17:10.307451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.989 [2024-12-10 00:17:10.308080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.989 [2024-12-10 00:17:10.308274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.989 [2024-12-10 00:17:10.308286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.989 [2024-12-10 00:17:10.308295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.989 [2024-12-10 00:17:10.308304] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.989 [2024-12-10 00:17:10.319703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.989 [2024-12-10 00:17:10.320122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.989 [2024-12-10 00:17:10.320143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.989 [2024-12-10 00:17:10.320154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.989 [2024-12-10 00:17:10.320316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.989 [2024-12-10 00:17:10.320484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.989 [2024-12-10 00:17:10.320498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.989 [2024-12-10 00:17:10.320510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.989 [2024-12-10 00:17:10.320520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.989 [2024-12-10 00:17:10.332644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.989 [2024-12-10 00:17:10.333075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.989 [2024-12-10 00:17:10.333094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.989 [2024-12-10 00:17:10.333104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.989 [2024-12-10 00:17:10.333270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.989 [2024-12-10 00:17:10.333445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.989 [2024-12-10 00:17:10.333460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.989 [2024-12-10 00:17:10.333472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.989 [2024-12-10 00:17:10.333483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.989 [2024-12-10 00:17:10.345421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.989 [2024-12-10 00:17:10.345836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.989 [2024-12-10 00:17:10.345856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.989 [2024-12-10 00:17:10.345868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.989 [2024-12-10 00:17:10.346030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.989 [2024-12-10 00:17:10.346190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.989 [2024-12-10 00:17:10.346204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.989 [2024-12-10 00:17:10.346214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.989 [2024-12-10 00:17:10.346228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.989 [2024-12-10 00:17:10.358441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.989 [2024-12-10 00:17:10.358873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.989 [2024-12-10 00:17:10.358917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.989 [2024-12-10 00:17:10.358952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.989 [2024-12-10 00:17:10.359512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.989 [2024-12-10 00:17:10.359672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.990 [2024-12-10 00:17:10.359682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.990 [2024-12-10 00:17:10.359690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.990 [2024-12-10 00:17:10.359698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.990 [2024-12-10 00:17:10.371372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.990 [2024-12-10 00:17:10.371800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-12-10 00:17:10.371876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.990 [2024-12-10 00:17:10.371918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.990 [2024-12-10 00:17:10.372468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.990 [2024-12-10 00:17:10.372640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.990 [2024-12-10 00:17:10.372652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.990 [2024-12-10 00:17:10.372663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.990 [2024-12-10 00:17:10.372673] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.990 [2024-12-10 00:17:10.384257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.990 [2024-12-10 00:17:10.384686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-12-10 00:17:10.384749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.990 [2024-12-10 00:17:10.384789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.990 [2024-12-10 00:17:10.385345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.990 [2024-12-10 00:17:10.385514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.990 [2024-12-10 00:17:10.385526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.990 [2024-12-10 00:17:10.385539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.990 [2024-12-10 00:17:10.385552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.990 [2024-12-10 00:17:10.397189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.990 [2024-12-10 00:17:10.397612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-12-10 00:17:10.397680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.990 [2024-12-10 00:17:10.397714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.990 [2024-12-10 00:17:10.398324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.990 [2024-12-10 00:17:10.398831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.990 [2024-12-10 00:17:10.398845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.990 [2024-12-10 00:17:10.398854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.990 [2024-12-10 00:17:10.398863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.990 [2024-12-10 00:17:10.410082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.990 [2024-12-10 00:17:10.410441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-12-10 00:17:10.410462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.990 [2024-12-10 00:17:10.410473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.990 [2024-12-10 00:17:10.410640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.990 [2024-12-10 00:17:10.410807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.990 [2024-12-10 00:17:10.410819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.990 [2024-12-10 00:17:10.410834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.990 [2024-12-10 00:17:10.410843] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.990 [2024-12-10 00:17:10.423065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.990 [2024-12-10 00:17:10.423500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-12-10 00:17:10.423520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.990 [2024-12-10 00:17:10.423530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.990 [2024-12-10 00:17:10.423698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.990 [2024-12-10 00:17:10.423870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.990 [2024-12-10 00:17:10.423882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.990 [2024-12-10 00:17:10.423891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.990 [2024-12-10 00:17:10.423900] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:25.990 [2024-12-10 00:17:10.435962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.990 [2024-12-10 00:17:10.436367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-12-10 00:17:10.436386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.990 [2024-12-10 00:17:10.436399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.990 [2024-12-10 00:17:10.436573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.990 [2024-12-10 00:17:10.436744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.990 [2024-12-10 00:17:10.436758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.990 [2024-12-10 00:17:10.436769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.990 [2024-12-10 00:17:10.436780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:25.990 [2024-12-10 00:17:10.448942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:25.990 [2024-12-10 00:17:10.449360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.990 [2024-12-10 00:17:10.449379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:25.990 [2024-12-10 00:17:10.449389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:25.990 [2024-12-10 00:17:10.449587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:25.990 [2024-12-10 00:17:10.449756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:25.990 [2024-12-10 00:17:10.449768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:25.990 [2024-12-10 00:17:10.449777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:25.990 [2024-12-10 00:17:10.449785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.251 [2024-12-10 00:17:10.461915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.251 [2024-12-10 00:17:10.462340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.251 [2024-12-10 00:17:10.462399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.251 [2024-12-10 00:17:10.462440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.251 [2024-12-10 00:17:10.462895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.251 [2024-12-10 00:17:10.463065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.251 [2024-12-10 00:17:10.463077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.251 [2024-12-10 00:17:10.463087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.251 [2024-12-10 00:17:10.463095] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.251 [2024-12-10 00:17:10.474749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.251 [2024-12-10 00:17:10.475186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.252 [2024-12-10 00:17:10.475243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.252 [2024-12-10 00:17:10.475276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.252 [2024-12-10 00:17:10.475732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.252 [2024-12-10 00:17:10.475919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.252 [2024-12-10 00:17:10.475931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.252 [2024-12-10 00:17:10.475941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.252 [2024-12-10 00:17:10.475949] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.252 [2024-12-10 00:17:10.487465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.252 [2024-12-10 00:17:10.487888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.252 [2024-12-10 00:17:10.487942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.252 [2024-12-10 00:17:10.487974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.252 [2024-12-10 00:17:10.488563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.252 [2024-12-10 00:17:10.488799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.252 [2024-12-10 00:17:10.488810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.252 [2024-12-10 00:17:10.488818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.252 [2024-12-10 00:17:10.488834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.252 [2024-12-10 00:17:10.500237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.252 [2024-12-10 00:17:10.500645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.252 [2024-12-10 00:17:10.500699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.252 [2024-12-10 00:17:10.500731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.252 [2024-12-10 00:17:10.501340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.252 [2024-12-10 00:17:10.501727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.252 [2024-12-10 00:17:10.501739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.252 [2024-12-10 00:17:10.501748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.252 [2024-12-10 00:17:10.501757] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.252 [2024-12-10 00:17:10.512916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.252 [2024-12-10 00:17:10.513330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.252 [2024-12-10 00:17:10.513348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.252 [2024-12-10 00:17:10.513357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.252 [2024-12-10 00:17:10.513514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.252 [2024-12-10 00:17:10.513672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.252 [2024-12-10 00:17:10.513683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.252 [2024-12-10 00:17:10.513692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.252 [2024-12-10 00:17:10.513703] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.252 [2024-12-10 00:17:10.525667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.252 [2024-12-10 00:17:10.526061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.252 [2024-12-10 00:17:10.526080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.252 [2024-12-10 00:17:10.526089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.252 [2024-12-10 00:17:10.526246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.252 [2024-12-10 00:17:10.526404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.252 [2024-12-10 00:17:10.526415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.252 [2024-12-10 00:17:10.526423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.252 [2024-12-10 00:17:10.526431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.252 [2024-12-10 00:17:10.538454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.252 [2024-12-10 00:17:10.538878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.252 [2024-12-10 00:17:10.538931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.252 [2024-12-10 00:17:10.538962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.252 [2024-12-10 00:17:10.539553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.252 [2024-12-10 00:17:10.539945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.252 [2024-12-10 00:17:10.539956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.252 [2024-12-10 00:17:10.539965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.252 [2024-12-10 00:17:10.539972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.252 [2024-12-10 00:17:10.551213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.252 [2024-12-10 00:17:10.551642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.252 [2024-12-10 00:17:10.551695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.252 [2024-12-10 00:17:10.551726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.252 [2024-12-10 00:17:10.552198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.252 [2024-12-10 00:17:10.552366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.252 [2024-12-10 00:17:10.552378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.252 [2024-12-10 00:17:10.552387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.252 [2024-12-10 00:17:10.552395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.252 [2024-12-10 00:17:10.563874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.252 [2024-12-10 00:17:10.564304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.252 [2024-12-10 00:17:10.564357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.252 [2024-12-10 00:17:10.564388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.252 [2024-12-10 00:17:10.564782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.252 [2024-12-10 00:17:10.564967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.252 [2024-12-10 00:17:10.564979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.252 [2024-12-10 00:17:10.564988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.252 [2024-12-10 00:17:10.564996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.252 [2024-12-10 00:17:10.576592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.252 [2024-12-10 00:17:10.577034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.252 [2024-12-10 00:17:10.577086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.252 [2024-12-10 00:17:10.577118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.252 [2024-12-10 00:17:10.577709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.252 [2024-12-10 00:17:10.578193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.252 [2024-12-10 00:17:10.578205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.252 [2024-12-10 00:17:10.578214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.252 [2024-12-10 00:17:10.578222] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.252 [2024-12-10 00:17:10.589549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.252 [2024-12-10 00:17:10.589966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.252 [2024-12-10 00:17:10.589985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.252 [2024-12-10 00:17:10.589994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.252 [2024-12-10 00:17:10.590151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.252 [2024-12-10 00:17:10.590309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.252 [2024-12-10 00:17:10.590320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.252 [2024-12-10 00:17:10.590328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.252 [2024-12-10 00:17:10.590337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.252 [2024-12-10 00:17:10.602390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.252 [2024-12-10 00:17:10.602670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.252 [2024-12-10 00:17:10.602689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.253 [2024-12-10 00:17:10.602701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.253 [2024-12-10 00:17:10.602879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.253 [2024-12-10 00:17:10.603047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.253 [2024-12-10 00:17:10.603058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.253 [2024-12-10 00:17:10.603067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.253 [2024-12-10 00:17:10.603075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.253 [2024-12-10 00:17:10.615164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.253 [2024-12-10 00:17:10.615502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.253 [2024-12-10 00:17:10.615521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.253 [2024-12-10 00:17:10.615531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.253 [2024-12-10 00:17:10.615697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.253 [2024-12-10 00:17:10.615870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.253 [2024-12-10 00:17:10.615882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.253 [2024-12-10 00:17:10.615890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.253 [2024-12-10 00:17:10.615899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.253 [2024-12-10 00:17:10.627898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.253 [2024-12-10 00:17:10.628243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.253 [2024-12-10 00:17:10.628261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.253 [2024-12-10 00:17:10.628271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.253 [2024-12-10 00:17:10.628428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.253 [2024-12-10 00:17:10.628586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.253 [2024-12-10 00:17:10.628597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.253 [2024-12-10 00:17:10.628607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.253 [2024-12-10 00:17:10.628615] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.253 [2024-12-10 00:17:10.640716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.253 [2024-12-10 00:17:10.641117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.253 [2024-12-10 00:17:10.641136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.253 [2024-12-10 00:17:10.641145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.253 [2024-12-10 00:17:10.641311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.253 [2024-12-10 00:17:10.641477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.253 [2024-12-10 00:17:10.641491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.253 [2024-12-10 00:17:10.641501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.253 [2024-12-10 00:17:10.641509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.253 [2024-12-10 00:17:10.653624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.253 [2024-12-10 00:17:10.653972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.253 [2024-12-10 00:17:10.653991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.253 [2024-12-10 00:17:10.654001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.253 [2024-12-10 00:17:10.654157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.253 [2024-12-10 00:17:10.654315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.253 [2024-12-10 00:17:10.654326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.253 [2024-12-10 00:17:10.654335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.253 [2024-12-10 00:17:10.654343] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.253 [2024-12-10 00:17:10.666443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.253 [2024-12-10 00:17:10.666872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.253 [2024-12-10 00:17:10.666891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.253 [2024-12-10 00:17:10.666901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.253 [2024-12-10 00:17:10.667072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.253 [2024-12-10 00:17:10.667251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.253 [2024-12-10 00:17:10.667262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.253 [2024-12-10 00:17:10.667271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.253 [2024-12-10 00:17:10.667279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.253 6034.60 IOPS, 23.57 MiB/s [2024-12-09T23:17:10.726Z] [2024-12-10 00:17:10.679251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.253 [2024-12-10 00:17:10.679665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.253 [2024-12-10 00:17:10.679684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.253 [2024-12-10 00:17:10.679694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.253 [2024-12-10 00:17:10.679873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.253 [2024-12-10 00:17:10.680041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.253 [2024-12-10 00:17:10.680053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.253 [2024-12-10 00:17:10.680065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.253 [2024-12-10 00:17:10.680074] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.253 [2024-12-10 00:17:10.692013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.253 [2024-12-10 00:17:10.692404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.253 [2024-12-10 00:17:10.692423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.253 [2024-12-10 00:17:10.692432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.253 [2024-12-10 00:17:10.692590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.253 [2024-12-10 00:17:10.692748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.253 [2024-12-10 00:17:10.692759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.253 [2024-12-10 00:17:10.692768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.253 [2024-12-10 00:17:10.692776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.253 [2024-12-10 00:17:10.704692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.253 [2024-12-10 00:17:10.705129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.253 [2024-12-10 00:17:10.705183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.253 [2024-12-10 00:17:10.705215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.253 [2024-12-10 00:17:10.705582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.253 [2024-12-10 00:17:10.705742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.253 [2024-12-10 00:17:10.705753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.253 [2024-12-10 00:17:10.705762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.253 [2024-12-10 00:17:10.705770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.253 [2024-12-10 00:17:10.717458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.253 [2024-12-10 00:17:10.717847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.253 [2024-12-10 00:17:10.717866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.253 [2024-12-10 00:17:10.717875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.253 [2024-12-10 00:17:10.718032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.253 [2024-12-10 00:17:10.718214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.253 [2024-12-10 00:17:10.718225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.253 [2024-12-10 00:17:10.718234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.253 [2024-12-10 00:17:10.718242] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.514 [2024-12-10 00:17:10.730211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.514 [2024-12-10 00:17:10.730547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.514 [2024-12-10 00:17:10.730564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.514 [2024-12-10 00:17:10.730573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.514 [2024-12-10 00:17:10.730730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.514 [2024-12-10 00:17:10.730912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.514 [2024-12-10 00:17:10.730924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.514 [2024-12-10 00:17:10.730933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.514 [2024-12-10 00:17:10.730941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.514 [2024-12-10 00:17:10.742896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.514 [2024-12-10 00:17:10.743309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.514 [2024-12-10 00:17:10.743327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.514 [2024-12-10 00:17:10.743336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.514 [2024-12-10 00:17:10.743493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.514 [2024-12-10 00:17:10.743651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.514 [2024-12-10 00:17:10.743662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.514 [2024-12-10 00:17:10.743671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.514 [2024-12-10 00:17:10.743678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.514 [2024-12-10 00:17:10.755676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.514 [2024-12-10 00:17:10.756090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.514 [2024-12-10 00:17:10.756108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.514 [2024-12-10 00:17:10.756117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.514 [2024-12-10 00:17:10.756273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.514 [2024-12-10 00:17:10.756431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.514 [2024-12-10 00:17:10.756441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.514 [2024-12-10 00:17:10.756450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.514 [2024-12-10 00:17:10.756457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.514 [2024-12-10 00:17:10.768354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.514 [2024-12-10 00:17:10.768761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.514 [2024-12-10 00:17:10.768780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.514 [2024-12-10 00:17:10.768795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.514 [2024-12-10 00:17:10.768979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.514 [2024-12-10 00:17:10.769147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.514 [2024-12-10 00:17:10.769158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.514 [2024-12-10 00:17:10.769167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.514 [2024-12-10 00:17:10.769175] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.514 [2024-12-10 00:17:10.781034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.514 [2024-12-10 00:17:10.781392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.514 [2024-12-10 00:17:10.781410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.514 [2024-12-10 00:17:10.781419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.514 [2024-12-10 00:17:10.781575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.514 [2024-12-10 00:17:10.781734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.514 [2024-12-10 00:17:10.781744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.514 [2024-12-10 00:17:10.781753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.514 [2024-12-10 00:17:10.781761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.514 [2024-12-10 00:17:10.793718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.514 [2024-12-10 00:17:10.794136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.514 [2024-12-10 00:17:10.794155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.514 [2024-12-10 00:17:10.794164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.514 [2024-12-10 00:17:10.794320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.514 [2024-12-10 00:17:10.794479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.514 [2024-12-10 00:17:10.794489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.514 [2024-12-10 00:17:10.794498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.514 [2024-12-10 00:17:10.794506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.514 [2024-12-10 00:17:10.806420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.514 [2024-12-10 00:17:10.806830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.514 [2024-12-10 00:17:10.806849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.514 [2024-12-10 00:17:10.806858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.514 [2024-12-10 00:17:10.807015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.514 [2024-12-10 00:17:10.807176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.514 [2024-12-10 00:17:10.807187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.514 [2024-12-10 00:17:10.807196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.514 [2024-12-10 00:17:10.807204] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.514 [2024-12-10 00:17:10.819183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.514 [2024-12-10 00:17:10.819531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.514 [2024-12-10 00:17:10.819549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.514 [2024-12-10 00:17:10.819557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.514 [2024-12-10 00:17:10.819714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.514 [2024-12-10 00:17:10.819895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.514 [2024-12-10 00:17:10.819907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.514 [2024-12-10 00:17:10.819916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.514 [2024-12-10 00:17:10.819924] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.514 [2024-12-10 00:17:10.831914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.514 [2024-12-10 00:17:10.832349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.514 [2024-12-10 00:17:10.832402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.514 [2024-12-10 00:17:10.832434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.514 [2024-12-10 00:17:10.833042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.514 [2024-12-10 00:17:10.833587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.514 [2024-12-10 00:17:10.833598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.514 [2024-12-10 00:17:10.833607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.514 [2024-12-10 00:17:10.833615] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.514 [2024-12-10 00:17:10.844911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.514 [2024-12-10 00:17:10.845336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.514 [2024-12-10 00:17:10.845354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.514 [2024-12-10 00:17:10.845363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.515 [2024-12-10 00:17:10.845529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.515 [2024-12-10 00:17:10.845695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.515 [2024-12-10 00:17:10.845707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.515 [2024-12-10 00:17:10.845719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.515 [2024-12-10 00:17:10.845728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.515 [2024-12-10 00:17:10.857657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.515 [2024-12-10 00:17:10.858071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.515 [2024-12-10 00:17:10.858089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.515 [2024-12-10 00:17:10.858098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.515 [2024-12-10 00:17:10.858256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.515 [2024-12-10 00:17:10.858414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.515 [2024-12-10 00:17:10.858425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.515 [2024-12-10 00:17:10.858433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.515 [2024-12-10 00:17:10.858441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.515 [2024-12-10 00:17:10.870394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.515 [2024-12-10 00:17:10.870731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.515 [2024-12-10 00:17:10.870749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.515 [2024-12-10 00:17:10.870758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.515 [2024-12-10 00:17:10.870940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.515 [2024-12-10 00:17:10.871106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.515 [2024-12-10 00:17:10.871118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.515 [2024-12-10 00:17:10.871127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.515 [2024-12-10 00:17:10.871135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.515 [2024-12-10 00:17:10.883116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.515 [2024-12-10 00:17:10.883471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.515 [2024-12-10 00:17:10.883523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.515 [2024-12-10 00:17:10.883554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.515 [2024-12-10 00:17:10.883989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.515 [2024-12-10 00:17:10.884149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.515 [2024-12-10 00:17:10.884160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.515 [2024-12-10 00:17:10.884169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.515 [2024-12-10 00:17:10.884177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.515 [2024-12-10 00:17:10.895840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.515 [2024-12-10 00:17:10.896246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.515 [2024-12-10 00:17:10.896298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.515 [2024-12-10 00:17:10.896330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.515 [2024-12-10 00:17:10.896834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.515 [2024-12-10 00:17:10.896993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.515 [2024-12-10 00:17:10.897003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.515 [2024-12-10 00:17:10.897012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.515 [2024-12-10 00:17:10.897019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.515 [2024-12-10 00:17:10.908592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.515 [2024-12-10 00:17:10.908991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.515 [2024-12-10 00:17:10.909011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.515 [2024-12-10 00:17:10.909021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.515 [2024-12-10 00:17:10.909186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.515 [2024-12-10 00:17:10.909353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.515 [2024-12-10 00:17:10.909365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.515 [2024-12-10 00:17:10.909374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.515 [2024-12-10 00:17:10.909382] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.515 [2024-12-10 00:17:10.921353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.515 [2024-12-10 00:17:10.921713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.515 [2024-12-10 00:17:10.921730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.515 [2024-12-10 00:17:10.921739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.515 [2024-12-10 00:17:10.921919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.515 [2024-12-10 00:17:10.922085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.515 [2024-12-10 00:17:10.922094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.515 [2024-12-10 00:17:10.922103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.515 [2024-12-10 00:17:10.922111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.515 [2024-12-10 00:17:10.934167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.515 [2024-12-10 00:17:10.934576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.515 [2024-12-10 00:17:10.934628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.515 [2024-12-10 00:17:10.934667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.515 [2024-12-10 00:17:10.935060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.515 [2024-12-10 00:17:10.935228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.515 [2024-12-10 00:17:10.935239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.515 [2024-12-10 00:17:10.935248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.515 [2024-12-10 00:17:10.935256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.515 [2024-12-10 00:17:10.946878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.515 [2024-12-10 00:17:10.947301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.515 [2024-12-10 00:17:10.947354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.515 [2024-12-10 00:17:10.947386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.515 [2024-12-10 00:17:10.947915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.515 [2024-12-10 00:17:10.948084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.515 [2024-12-10 00:17:10.948095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.515 [2024-12-10 00:17:10.948104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.515 [2024-12-10 00:17:10.948113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.515 [2024-12-10 00:17:10.959603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.515 [2024-12-10 00:17:10.960026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.515 [2024-12-10 00:17:10.960080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.515 [2024-12-10 00:17:10.960111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.515 [2024-12-10 00:17:10.960636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.515 [2024-12-10 00:17:10.960795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.515 [2024-12-10 00:17:10.960806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.515 [2024-12-10 00:17:10.960815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.515 [2024-12-10 00:17:10.960828] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.515 [2024-12-10 00:17:10.972390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.515 [2024-12-10 00:17:10.972784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.515 [2024-12-10 00:17:10.972802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.515 [2024-12-10 00:17:10.972811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.515 [2024-12-10 00:17:10.972996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.515 [2024-12-10 00:17:10.973166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.516 [2024-12-10 00:17:10.973177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.516 [2024-12-10 00:17:10.973186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.516 [2024-12-10 00:17:10.973194] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.516 [2024-12-10 00:17:10.985300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.516 [2024-12-10 00:17:10.985722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.516 [2024-12-10 00:17:10.985775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.516 [2024-12-10 00:17:10.985807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.776 [2024-12-10 00:17:10.986412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.776 [2024-12-10 00:17:10.987024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.776 [2024-12-10 00:17:10.987036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.776 [2024-12-10 00:17:10.987045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.776 [2024-12-10 00:17:10.987053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.776 [2024-12-10 00:17:10.998046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.776 [2024-12-10 00:17:10.998453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.776 [2024-12-10 00:17:10.998471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.776 [2024-12-10 00:17:10.998481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.776 [2024-12-10 00:17:10.998638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.776 [2024-12-10 00:17:10.998796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.776 [2024-12-10 00:17:10.998806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.776 [2024-12-10 00:17:10.998815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.776 [2024-12-10 00:17:10.998829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.776 [2024-12-10 00:17:11.010842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.776 [2024-12-10 00:17:11.011194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.776 [2024-12-10 00:17:11.011212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.776 [2024-12-10 00:17:11.011221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.776 [2024-12-10 00:17:11.011378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.776 [2024-12-10 00:17:11.011535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.776 [2024-12-10 00:17:11.011546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.776 [2024-12-10 00:17:11.011558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.776 [2024-12-10 00:17:11.011567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.776 [2024-12-10 00:17:11.023580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.776 [2024-12-10 00:17:11.024011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.776 [2024-12-10 00:17:11.024064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.776 [2024-12-10 00:17:11.024097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.776 [2024-12-10 00:17:11.024687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.776 [2024-12-10 00:17:11.025113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.776 [2024-12-10 00:17:11.025125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.776 [2024-12-10 00:17:11.025134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.776 [2024-12-10 00:17:11.025143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.776 [2024-12-10 00:17:11.036375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.776 [2024-12-10 00:17:11.037485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.776 [2024-12-10 00:17:11.037510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.776 [2024-12-10 00:17:11.037521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.776 [2024-12-10 00:17:11.037696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.776 [2024-12-10 00:17:11.037871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.776 [2024-12-10 00:17:11.037883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.776 [2024-12-10 00:17:11.037892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.776 [2024-12-10 00:17:11.037901] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.776 [2024-12-10 00:17:11.049146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.776 [2024-12-10 00:17:11.049550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.776 [2024-12-10 00:17:11.049602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.776 [2024-12-10 00:17:11.049636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.776 [2024-12-10 00:17:11.050157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.776 [2024-12-10 00:17:11.050329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.776 [2024-12-10 00:17:11.050340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.776 [2024-12-10 00:17:11.050349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.776 [2024-12-10 00:17:11.050357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.776 [2024-12-10 00:17:11.061964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.776 [2024-12-10 00:17:11.062322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.776 [2024-12-10 00:17:11.062341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.776 [2024-12-10 00:17:11.062350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.776 [2024-12-10 00:17:11.062508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.776 [2024-12-10 00:17:11.062665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.776 [2024-12-10 00:17:11.062676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.776 [2024-12-10 00:17:11.062684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.776 [2024-12-10 00:17:11.062692] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.776 [2024-12-10 00:17:11.074778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.776 [2024-12-10 00:17:11.075108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.776 [2024-12-10 00:17:11.075128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.776 [2024-12-10 00:17:11.075137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.776 [2024-12-10 00:17:11.075295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.776 [2024-12-10 00:17:11.075453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.776 [2024-12-10 00:17:11.075464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.776 [2024-12-10 00:17:11.075475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.776 [2024-12-10 00:17:11.075483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.776 [2024-12-10 00:17:11.087660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.776 [2024-12-10 00:17:11.088007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.776 [2024-12-10 00:17:11.088025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.777 [2024-12-10 00:17:11.088035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.777 [2024-12-10 00:17:11.088192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.777 [2024-12-10 00:17:11.088351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.777 [2024-12-10 00:17:11.088363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.777 [2024-12-10 00:17:11.088372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.777 [2024-12-10 00:17:11.088380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.777 [2024-12-10 00:17:11.100452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.777 [2024-12-10 00:17:11.100887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.777 [2024-12-10 00:17:11.100907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.777 [2024-12-10 00:17:11.100919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.777 [2024-12-10 00:17:11.101077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.777 [2024-12-10 00:17:11.101235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.777 [2024-12-10 00:17:11.101247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.777 [2024-12-10 00:17:11.101255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.777 [2024-12-10 00:17:11.101263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.777 [2024-12-10 00:17:11.113468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.777 [2024-12-10 00:17:11.113882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.777 [2024-12-10 00:17:11.113902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.777 [2024-12-10 00:17:11.113912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.777 [2024-12-10 00:17:11.114083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.777 [2024-12-10 00:17:11.114253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.777 [2024-12-10 00:17:11.114265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.777 [2024-12-10 00:17:11.114274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.777 [2024-12-10 00:17:11.114282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.777 [2024-12-10 00:17:11.126216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.777 [2024-12-10 00:17:11.126638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.777 [2024-12-10 00:17:11.126687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.777 [2024-12-10 00:17:11.126719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.777 [2024-12-10 00:17:11.127296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.777 [2024-12-10 00:17:11.127456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.777 [2024-12-10 00:17:11.127467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.777 [2024-12-10 00:17:11.127476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.777 [2024-12-10 00:17:11.127484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.777 [2024-12-10 00:17:11.139047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.777 [2024-12-10 00:17:11.139376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.777 [2024-12-10 00:17:11.139428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.777 [2024-12-10 00:17:11.139460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.777 [2024-12-10 00:17:11.140076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.777 [2024-12-10 00:17:11.140239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.777 [2024-12-10 00:17:11.140250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.777 [2024-12-10 00:17:11.140259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.777 [2024-12-10 00:17:11.140267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.777 [2024-12-10 00:17:11.151889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.777 [2024-12-10 00:17:11.152179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.777 [2024-12-10 00:17:11.152199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.777 [2024-12-10 00:17:11.152209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.777 [2024-12-10 00:17:11.152375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.777 [2024-12-10 00:17:11.152542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.777 [2024-12-10 00:17:11.152554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.777 [2024-12-10 00:17:11.152562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.777 [2024-12-10 00:17:11.152572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.777 [2024-12-10 00:17:11.164652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.777 [2024-12-10 00:17:11.164914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.777 [2024-12-10 00:17:11.164933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.777 [2024-12-10 00:17:11.164942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.777 [2024-12-10 00:17:11.165099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.777 [2024-12-10 00:17:11.165257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.777 [2024-12-10 00:17:11.165269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.777 [2024-12-10 00:17:11.165277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.777 [2024-12-10 00:17:11.165284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.777 [2024-12-10 00:17:11.177424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.777 [2024-12-10 00:17:11.177799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.777 [2024-12-10 00:17:11.177817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.777 [2024-12-10 00:17:11.177832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.777 [2024-12-10 00:17:11.178013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.777 [2024-12-10 00:17:11.178180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.777 [2024-12-10 00:17:11.178191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.777 [2024-12-10 00:17:11.178203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.777 [2024-12-10 00:17:11.178212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.777 [2024-12-10 00:17:11.190241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.777 [2024-12-10 00:17:11.190570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.777 [2024-12-10 00:17:11.190589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.777 [2024-12-10 00:17:11.190599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.777 [2024-12-10 00:17:11.190765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.777 [2024-12-10 00:17:11.190939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.777 [2024-12-10 00:17:11.190951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.777 [2024-12-10 00:17:11.190960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.777 [2024-12-10 00:17:11.190968] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.777 [2024-12-10 00:17:11.203280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.777 [2024-12-10 00:17:11.203672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.777 [2024-12-10 00:17:11.203731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.777 [2024-12-10 00:17:11.203765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.777 [2024-12-10 00:17:11.204377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.777 [2024-12-10 00:17:11.204790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.777 [2024-12-10 00:17:11.204802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.777 [2024-12-10 00:17:11.204813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.777 [2024-12-10 00:17:11.204837] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.777 [2024-12-10 00:17:11.216221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.777 [2024-12-10 00:17:11.216502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.778 [2024-12-10 00:17:11.216568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.778 [2024-12-10 00:17:11.216602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.778 [2024-12-10 00:17:11.217214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.778 [2024-12-10 00:17:11.217409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.778 [2024-12-10 00:17:11.217421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.778 [2024-12-10 00:17:11.217430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.778 [2024-12-10 00:17:11.217439] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:26.778 [2024-12-10 00:17:11.228931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.778 [2024-12-10 00:17:11.229368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.778 [2024-12-10 00:17:11.229422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.778 [2024-12-10 00:17:11.229455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.778 [2024-12-10 00:17:11.230063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.778 [2024-12-10 00:17:11.230632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.778 [2024-12-10 00:17:11.230643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.778 [2024-12-10 00:17:11.230653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.778 [2024-12-10 00:17:11.230661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:26.778 [2024-12-10 00:17:11.241685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:26.778 [2024-12-10 00:17:11.242120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.778 [2024-12-10 00:17:11.242140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:26.778 [2024-12-10 00:17:11.242150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:26.778 [2024-12-10 00:17:11.242315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:26.778 [2024-12-10 00:17:11.242483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:26.778 [2024-12-10 00:17:11.242495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:26.778 [2024-12-10 00:17:11.242503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:26.778 [2024-12-10 00:17:11.242512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 588649 Killed "${NVMF_APP[@]}" "$@" 00:35:27.038 00:17:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:27.038 00:17:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:27.038 00:17:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:27.038 00:17:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:27.038 00:17:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.038 [2024-12-10 00:17:11.254674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.038 [2024-12-10 00:17:11.254963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.038 [2024-12-10 00:17:11.254983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.038 [2024-12-10 00:17:11.254993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.038 [2024-12-10 00:17:11.255164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.038 [2024-12-10 00:17:11.255336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.038 [2024-12-10 00:17:11.255347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.038 [2024-12-10 00:17:11.255356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.038 [2024-12-10 00:17:11.255368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.038 00:17:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=590093 00:35:27.038 00:17:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 590093 00:35:27.038 00:17:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:27.038 00:17:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 590093 ']' 00:35:27.038 00:17:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.038 00:17:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.038 00:17:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.038 00:17:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.038 00:17:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.038 [2024-12-10 00:17:11.267579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.038 [2024-12-10 00:17:11.267909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.038 [2024-12-10 00:17:11.267929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.038 [2024-12-10 00:17:11.267939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.038 [2024-12-10 00:17:11.268111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.038 [2024-12-10 00:17:11.268282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.038 [2024-12-10 00:17:11.268294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.038 [2024-12-10 00:17:11.268305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.038 [2024-12-10 00:17:11.268314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.038 [2024-12-10 00:17:11.280525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.038 [2024-12-10 00:17:11.280863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.038 [2024-12-10 00:17:11.280884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.038 [2024-12-10 00:17:11.280894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.038 [2024-12-10 00:17:11.281065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.038 [2024-12-10 00:17:11.281237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.038 [2024-12-10 00:17:11.281249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.038 [2024-12-10 00:17:11.281258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.039 [2024-12-10 00:17:11.281266] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.039 [2024-12-10 00:17:11.293469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.039 [2024-12-10 00:17:11.293747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.039 [2024-12-10 00:17:11.293770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.039 [2024-12-10 00:17:11.293780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.039 [2024-12-10 00:17:11.293957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.039 [2024-12-10 00:17:11.294128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.039 [2024-12-10 00:17:11.294140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.039 [2024-12-10 00:17:11.294149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.039 [2024-12-10 00:17:11.294157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.039 [2024-12-10 00:17:11.306328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.039 [2024-12-10 00:17:11.306625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.039 [2024-12-10 00:17:11.306645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.039 [2024-12-10 00:17:11.306654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.039 [2024-12-10 00:17:11.306832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.039 [2024-12-10 00:17:11.307014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.039 [2024-12-10 00:17:11.307026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.039 [2024-12-10 00:17:11.307034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.039 [2024-12-10 00:17:11.307043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.039 [2024-12-10 00:17:11.313450] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:35:27.039 [2024-12-10 00:17:11.313495] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.039 [2024-12-10 00:17:11.319420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.039 [2024-12-10 00:17:11.319791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.039 [2024-12-10 00:17:11.319810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.039 [2024-12-10 00:17:11.319820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.039 [2024-12-10 00:17:11.320010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.039 [2024-12-10 00:17:11.320193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.039 [2024-12-10 00:17:11.320205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.039 [2024-12-10 00:17:11.320215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.039 [2024-12-10 00:17:11.320223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.039 [2024-12-10 00:17:11.332343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.039 [2024-12-10 00:17:11.332676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.039 [2024-12-10 00:17:11.332700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.039 [2024-12-10 00:17:11.332710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.039 [2024-12-10 00:17:11.332886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.039 [2024-12-10 00:17:11.333066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.039 [2024-12-10 00:17:11.333077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.039 [2024-12-10 00:17:11.333086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.039 [2024-12-10 00:17:11.333095] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.039 [2024-12-10 00:17:11.345348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.039 [2024-12-10 00:17:11.345749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.039 [2024-12-10 00:17:11.345768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.039 [2024-12-10 00:17:11.345778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.039 [2024-12-10 00:17:11.345968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.039 [2024-12-10 00:17:11.346141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.039 [2024-12-10 00:17:11.346152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.039 [2024-12-10 00:17:11.346161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.039 [2024-12-10 00:17:11.346169] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.039 [2024-12-10 00:17:11.358388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.039 [2024-12-10 00:17:11.358724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.039 [2024-12-10 00:17:11.358743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.039 [2024-12-10 00:17:11.358753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.039 [2024-12-10 00:17:11.358930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.039 [2024-12-10 00:17:11.359102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.039 [2024-12-10 00:17:11.359114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.039 [2024-12-10 00:17:11.359123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.039 [2024-12-10 00:17:11.359131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.039 [2024-12-10 00:17:11.371330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.039 [2024-12-10 00:17:11.371615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.039 [2024-12-10 00:17:11.371634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.039 [2024-12-10 00:17:11.371644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.039 [2024-12-10 00:17:11.371819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.039 [2024-12-10 00:17:11.372006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.039 [2024-12-10 00:17:11.372018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.039 [2024-12-10 00:17:11.372027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.039 [2024-12-10 00:17:11.372035] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.039 [2024-12-10 00:17:11.384202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.039 [2024-12-10 00:17:11.384537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.039 [2024-12-10 00:17:11.384556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.039 [2024-12-10 00:17:11.384566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.039 [2024-12-10 00:17:11.384732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.039 [2024-12-10 00:17:11.384904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.039 [2024-12-10 00:17:11.384916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.039 [2024-12-10 00:17:11.384925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.039 [2024-12-10 00:17:11.384933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.039 [2024-12-10 00:17:11.397128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.039 [2024-12-10 00:17:11.397410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.039 [2024-12-10 00:17:11.397430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.039 [2024-12-10 00:17:11.397439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.039 [2024-12-10 00:17:11.397605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.039 [2024-12-10 00:17:11.397772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.039 [2024-12-10 00:17:11.397783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.039 [2024-12-10 00:17:11.397792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.039 [2024-12-10 00:17:11.397800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.039 [2024-12-10 00:17:11.410007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.039 [2024-12-10 00:17:11.410293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.039 [2024-12-10 00:17:11.410312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.039 [2024-12-10 00:17:11.410322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.039 [2024-12-10 00:17:11.410492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.039 [2024-12-10 00:17:11.410663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.039 [2024-12-10 00:17:11.410678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.040 [2024-12-10 00:17:11.410687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.040 [2024-12-10 00:17:11.410696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.040 [2024-12-10 00:17:11.410860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:27.040 [2024-12-10 00:17:11.422945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.040 [2024-12-10 00:17:11.423298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.040 [2024-12-10 00:17:11.423319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.040 [2024-12-10 00:17:11.423329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.040 [2024-12-10 00:17:11.423496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.040 [2024-12-10 00:17:11.423663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.040 [2024-12-10 00:17:11.423675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.040 [2024-12-10 00:17:11.423684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.040 [2024-12-10 00:17:11.423693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.040 [2024-12-10 00:17:11.435905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.040 [2024-12-10 00:17:11.436235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.040 [2024-12-10 00:17:11.436254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.040 [2024-12-10 00:17:11.436264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.040 [2024-12-10 00:17:11.436432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.040 [2024-12-10 00:17:11.436598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.040 [2024-12-10 00:17:11.436610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.040 [2024-12-10 00:17:11.436618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.040 [2024-12-10 00:17:11.436627] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.040 [2024-12-10 00:17:11.448949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.040 [2024-12-10 00:17:11.449327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.040 [2024-12-10 00:17:11.449347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.040 [2024-12-10 00:17:11.449357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.040 [2024-12-10 00:17:11.449527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.040 [2024-12-10 00:17:11.449698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.040 [2024-12-10 00:17:11.449710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.040 [2024-12-10 00:17:11.449719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.040 [2024-12-10 00:17:11.449733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.040 [2024-12-10 00:17:11.453000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.040 [2024-12-10 00:17:11.453025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.040 [2024-12-10 00:17:11.453035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.040 [2024-12-10 00:17:11.453043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.040 [2024-12-10 00:17:11.453050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:27.040 [2024-12-10 00:17:11.454461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:27.040 [2024-12-10 00:17:11.454570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.040 [2024-12-10 00:17:11.454572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:27.040 [2024-12-10 00:17:11.461956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.040 [2024-12-10 00:17:11.462334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.040 [2024-12-10 00:17:11.462356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.040 [2024-12-10 00:17:11.462367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.040 [2024-12-10 00:17:11.462539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.040 [2024-12-10 00:17:11.462712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.040 [2024-12-10 00:17:11.462724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.040 [2024-12-10 00:17:11.462734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.040 [2024-12-10 00:17:11.462742] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.040 [2024-12-10 00:17:11.474983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.040 [2024-12-10 00:17:11.475298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.040 [2024-12-10 00:17:11.475321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.040 [2024-12-10 00:17:11.475332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.040 [2024-12-10 00:17:11.475504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.040 [2024-12-10 00:17:11.475677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.040 [2024-12-10 00:17:11.475689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.040 [2024-12-10 00:17:11.475699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.040 [2024-12-10 00:17:11.475707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.040 [2024-12-10 00:17:11.487912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.040 [2024-12-10 00:17:11.488345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.040 [2024-12-10 00:17:11.488366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.040 [2024-12-10 00:17:11.488377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.040 [2024-12-10 00:17:11.488554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.040 [2024-12-10 00:17:11.488727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.040 [2024-12-10 00:17:11.488738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.040 [2024-12-10 00:17:11.488748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.040 [2024-12-10 00:17:11.488757] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.040 [2024-12-10 00:17:11.500958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.040 [2024-12-10 00:17:11.501316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.040 [2024-12-10 00:17:11.501338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.040 [2024-12-10 00:17:11.501348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.040 [2024-12-10 00:17:11.501521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.040 [2024-12-10 00:17:11.501692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.040 [2024-12-10 00:17:11.501704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.040 [2024-12-10 00:17:11.501715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.040 [2024-12-10 00:17:11.501724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.301 [2024-12-10 00:17:11.513946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.301 [2024-12-10 00:17:11.514343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-12-10 00:17:11.514364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.301 [2024-12-10 00:17:11.514375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.301 [2024-12-10 00:17:11.514548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.301 [2024-12-10 00:17:11.514719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.301 [2024-12-10 00:17:11.514731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.301 [2024-12-10 00:17:11.514740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.301 [2024-12-10 00:17:11.514748] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.301 [2024-12-10 00:17:11.526959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.301 [2024-12-10 00:17:11.527394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-12-10 00:17:11.527415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.301 [2024-12-10 00:17:11.527425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.301 [2024-12-10 00:17:11.527597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.301 [2024-12-10 00:17:11.527769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.301 [2024-12-10 00:17:11.527785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.301 [2024-12-10 00:17:11.527794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.301 [2024-12-10 00:17:11.527803] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.301 [2024-12-10 00:17:11.540007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.301 [2024-12-10 00:17:11.540366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.301 [2024-12-10 00:17:11.540388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.301 [2024-12-10 00:17:11.540398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.301 [2024-12-10 00:17:11.540570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.301 [2024-12-10 00:17:11.540741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.301 [2024-12-10 00:17:11.540752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.301 [2024-12-10 00:17:11.540761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.302 [2024-12-10 00:17:11.540770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.302 [2024-12-10 00:17:11.552968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.302 [2024-12-10 00:17:11.553309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-12-10 00:17:11.553329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.302 [2024-12-10 00:17:11.553339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.302 [2024-12-10 00:17:11.553510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.302 [2024-12-10 00:17:11.553681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.302 [2024-12-10 00:17:11.553693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.302 [2024-12-10 00:17:11.553702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.302 [2024-12-10 00:17:11.553711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.302 [2024-12-10 00:17:11.565910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.302 [2024-12-10 00:17:11.566312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-12-10 00:17:11.566331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.302 [2024-12-10 00:17:11.566341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.302 [2024-12-10 00:17:11.566512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.302 [2024-12-10 00:17:11.566682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.302 [2024-12-10 00:17:11.566695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.302 [2024-12-10 00:17:11.566704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.302 [2024-12-10 00:17:11.566716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.302 [2024-12-10 00:17:11.578849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.302 [2024-12-10 00:17:11.579218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-12-10 00:17:11.579237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.302 [2024-12-10 00:17:11.579246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.302 [2024-12-10 00:17:11.579417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.302 [2024-12-10 00:17:11.579588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.302 [2024-12-10 00:17:11.579599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.302 [2024-12-10 00:17:11.579608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.302 [2024-12-10 00:17:11.579617] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.302 [2024-12-10 00:17:11.591827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.302 [2024-12-10 00:17:11.592113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-12-10 00:17:11.592132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.302 [2024-12-10 00:17:11.592142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.302 [2024-12-10 00:17:11.592312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.302 [2024-12-10 00:17:11.592483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.302 [2024-12-10 00:17:11.592495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.302 [2024-12-10 00:17:11.592504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.302 [2024-12-10 00:17:11.592512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.302 [2024-12-10 00:17:11.604874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.302 [2024-12-10 00:17:11.605206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-12-10 00:17:11.605226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.302 [2024-12-10 00:17:11.605236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.302 [2024-12-10 00:17:11.605407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.302 [2024-12-10 00:17:11.605578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.302 [2024-12-10 00:17:11.605590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.302 [2024-12-10 00:17:11.605599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.302 [2024-12-10 00:17:11.605608] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.302 [2024-12-10 00:17:11.617807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.302 [2024-12-10 00:17:11.618093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-12-10 00:17:11.618119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.302 [2024-12-10 00:17:11.618129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.302 [2024-12-10 00:17:11.618301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.302 [2024-12-10 00:17:11.618472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.302 [2024-12-10 00:17:11.618484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.302 [2024-12-10 00:17:11.618493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.302 [2024-12-10 00:17:11.618502] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.302 [2024-12-10 00:17:11.630719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.302 [2024-12-10 00:17:11.631048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-12-10 00:17:11.631067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.302 [2024-12-10 00:17:11.631077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.302 [2024-12-10 00:17:11.631248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.302 [2024-12-10 00:17:11.631420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.302 [2024-12-10 00:17:11.631432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.302 [2024-12-10 00:17:11.631441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.302 [2024-12-10 00:17:11.631449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.302 [2024-12-10 00:17:11.643644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.302 [2024-12-10 00:17:11.644058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-12-10 00:17:11.644077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.302 [2024-12-10 00:17:11.644087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.302 [2024-12-10 00:17:11.644258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.302 [2024-12-10 00:17:11.644428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.302 [2024-12-10 00:17:11.644440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.302 [2024-12-10 00:17:11.644449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.302 [2024-12-10 00:17:11.644457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.302 [2024-12-10 00:17:11.656635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.302 [2024-12-10 00:17:11.657060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-12-10 00:17:11.657079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.302 [2024-12-10 00:17:11.657089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.302 [2024-12-10 00:17:11.657263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.302 [2024-12-10 00:17:11.657434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.302 [2024-12-10 00:17:11.657445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.302 [2024-12-10 00:17:11.657454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.302 [2024-12-10 00:17:11.657463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.302 [2024-12-10 00:17:11.669645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.302 [2024-12-10 00:17:11.670003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.302 [2024-12-10 00:17:11.670023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.302 [2024-12-10 00:17:11.670033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.302 [2024-12-10 00:17:11.670203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.302 [2024-12-10 00:17:11.670374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.302 [2024-12-10 00:17:11.670385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.302 [2024-12-10 00:17:11.670396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.302 [2024-12-10 00:17:11.670406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.302 5028.83 IOPS, 19.64 MiB/s [2024-12-09T23:17:11.775Z] [2024-12-10 00:17:11.682551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.303 [2024-12-10 00:17:11.682956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-12-10 00:17:11.682976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.303 [2024-12-10 00:17:11.682986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.303 [2024-12-10 00:17:11.683157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.303 [2024-12-10 00:17:11.683328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.303 [2024-12-10 00:17:11.683340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.303 [2024-12-10 00:17:11.683350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.303 [2024-12-10 00:17:11.683359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.303 [2024-12-10 00:17:11.695535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.303 [2024-12-10 00:17:11.695899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-12-10 00:17:11.695919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.303 [2024-12-10 00:17:11.695929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.303 [2024-12-10 00:17:11.696100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.303 [2024-12-10 00:17:11.696271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.303 [2024-12-10 00:17:11.696286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.303 [2024-12-10 00:17:11.696295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.303 [2024-12-10 00:17:11.696304] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.303 [2024-12-10 00:17:11.708505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.303 [2024-12-10 00:17:11.708934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-12-10 00:17:11.708955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.303 [2024-12-10 00:17:11.708965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.303 [2024-12-10 00:17:11.709136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.303 [2024-12-10 00:17:11.709306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.303 [2024-12-10 00:17:11.709318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.303 [2024-12-10 00:17:11.709328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.303 [2024-12-10 00:17:11.709337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.303 [2024-12-10 00:17:11.721532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.303 [2024-12-10 00:17:11.721938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-12-10 00:17:11.721958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.303 [2024-12-10 00:17:11.721968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.303 [2024-12-10 00:17:11.722139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.303 [2024-12-10 00:17:11.722317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.303 [2024-12-10 00:17:11.722329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.303 [2024-12-10 00:17:11.722338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.303 [2024-12-10 00:17:11.722346] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.303 [2024-12-10 00:17:11.734528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.303 [2024-12-10 00:17:11.734937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-12-10 00:17:11.734957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.303 [2024-12-10 00:17:11.734966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.303 [2024-12-10 00:17:11.735138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.303 [2024-12-10 00:17:11.735309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.303 [2024-12-10 00:17:11.735321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.303 [2024-12-10 00:17:11.735330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.303 [2024-12-10 00:17:11.735342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.303 [2024-12-10 00:17:11.747510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.303 [2024-12-10 00:17:11.747936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-12-10 00:17:11.747956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.303 [2024-12-10 00:17:11.747965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.303 [2024-12-10 00:17:11.748136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.303 [2024-12-10 00:17:11.748307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.303 [2024-12-10 00:17:11.748319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.303 [2024-12-10 00:17:11.748327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.303 [2024-12-10 00:17:11.748336] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.303 [2024-12-10 00:17:11.760515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.303 [2024-12-10 00:17:11.760932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.303 [2024-12-10 00:17:11.760952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.303 [2024-12-10 00:17:11.760962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.303 [2024-12-10 00:17:11.761133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.303 [2024-12-10 00:17:11.761304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.303 [2024-12-10 00:17:11.761316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.303 [2024-12-10 00:17:11.761325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.303 [2024-12-10 00:17:11.761333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.568 [2024-12-10 00:17:11.773502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.568 [2024-12-10 00:17:11.773947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.568 [2024-12-10 00:17:11.773967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.568 [2024-12-10 00:17:11.773977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.568 [2024-12-10 00:17:11.774149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.568 [2024-12-10 00:17:11.774320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.568 [2024-12-10 00:17:11.774331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.568 [2024-12-10 00:17:11.774340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.568 [2024-12-10 00:17:11.774349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.568 [2024-12-10 00:17:11.786509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.568 [2024-12-10 00:17:11.786937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.568 [2024-12-10 00:17:11.786955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.568 [2024-12-10 00:17:11.786965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.569 [2024-12-10 00:17:11.787136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.569 [2024-12-10 00:17:11.787307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.569 [2024-12-10 00:17:11.787318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.569 [2024-12-10 00:17:11.787327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.569 [2024-12-10 00:17:11.787335] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.569 [2024-12-10 00:17:11.799512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.569 [2024-12-10 00:17:11.799942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.569 [2024-12-10 00:17:11.799962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.569 [2024-12-10 00:17:11.799972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.569 [2024-12-10 00:17:11.800142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.569 [2024-12-10 00:17:11.800313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.569 [2024-12-10 00:17:11.800325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.569 [2024-12-10 00:17:11.800334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.569 [2024-12-10 00:17:11.800342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.569 [2024-12-10 00:17:11.812516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.569 [2024-12-10 00:17:11.812943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.569 [2024-12-10 00:17:11.812963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.569 [2024-12-10 00:17:11.812973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.569 [2024-12-10 00:17:11.813144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.569 [2024-12-10 00:17:11.813315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.569 [2024-12-10 00:17:11.813327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.569 [2024-12-10 00:17:11.813336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.569 [2024-12-10 00:17:11.813344] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.569 [2024-12-10 00:17:11.825512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.569 [2024-12-10 00:17:11.825933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.569 [2024-12-10 00:17:11.825953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.569 [2024-12-10 00:17:11.825963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.569 [2024-12-10 00:17:11.826132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.569 [2024-12-10 00:17:11.826299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.569 [2024-12-10 00:17:11.826310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.569 [2024-12-10 00:17:11.826319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.569 [2024-12-10 00:17:11.826327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.569 [2024-12-10 00:17:11.838495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.569 [2024-12-10 00:17:11.838913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.569 [2024-12-10 00:17:11.838932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.569 [2024-12-10 00:17:11.838942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.569 [2024-12-10 00:17:11.839113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.569 [2024-12-10 00:17:11.839285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.569 [2024-12-10 00:17:11.839296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.569 [2024-12-10 00:17:11.839305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.569 [2024-12-10 00:17:11.839313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.569 [2024-12-10 00:17:11.851496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.569 [2024-12-10 00:17:11.851927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.569 [2024-12-10 00:17:11.851947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.569 [2024-12-10 00:17:11.851957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.569 [2024-12-10 00:17:11.852126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.569 [2024-12-10 00:17:11.852298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.569 [2024-12-10 00:17:11.852309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.569 [2024-12-10 00:17:11.852318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.569 [2024-12-10 00:17:11.852327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.569 [2024-12-10 00:17:11.864521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.569 [2024-12-10 00:17:11.864971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.569 [2024-12-10 00:17:11.864991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.569 [2024-12-10 00:17:11.865001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.569 [2024-12-10 00:17:11.865172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.569 [2024-12-10 00:17:11.865343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.569 [2024-12-10 00:17:11.865358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.569 [2024-12-10 00:17:11.865367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.569 [2024-12-10 00:17:11.865377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.569 [2024-12-10 00:17:11.877550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.569 [2024-12-10 00:17:11.877982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.570 [2024-12-10 00:17:11.878002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.570 [2024-12-10 00:17:11.878012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.570 [2024-12-10 00:17:11.878183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.570 [2024-12-10 00:17:11.878354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.570 [2024-12-10 00:17:11.878366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.570 [2024-12-10 00:17:11.878374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.570 [2024-12-10 00:17:11.878383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.570 [2024-12-10 00:17:11.890562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.570 [2024-12-10 00:17:11.890963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.570 [2024-12-10 00:17:11.890983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.570 [2024-12-10 00:17:11.890993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.570 [2024-12-10 00:17:11.891163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.570 [2024-12-10 00:17:11.891334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.570 [2024-12-10 00:17:11.891345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.570 [2024-12-10 00:17:11.891354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.570 [2024-12-10 00:17:11.891362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.570 [2024-12-10 00:17:11.903523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.570 [2024-12-10 00:17:11.903863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.570 [2024-12-10 00:17:11.903882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.570 [2024-12-10 00:17:11.903892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.570 [2024-12-10 00:17:11.904062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.570 [2024-12-10 00:17:11.904233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.570 [2024-12-10 00:17:11.904245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.570 [2024-12-10 00:17:11.904253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.570 [2024-12-10 00:17:11.904265] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.570 [2024-12-10 00:17:11.916445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.570 [2024-12-10 00:17:11.916869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.570 [2024-12-10 00:17:11.916889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.570 [2024-12-10 00:17:11.916899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.570 [2024-12-10 00:17:11.917070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.570 [2024-12-10 00:17:11.917241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.570 [2024-12-10 00:17:11.917253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.570 [2024-12-10 00:17:11.917262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.570 [2024-12-10 00:17:11.917270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.570 [2024-12-10 00:17:11.929448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.570 [2024-12-10 00:17:11.929854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.570 [2024-12-10 00:17:11.929873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.570 [2024-12-10 00:17:11.929883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.570 [2024-12-10 00:17:11.930054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.570 [2024-12-10 00:17:11.930225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.570 [2024-12-10 00:17:11.930237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.570 [2024-12-10 00:17:11.930246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.570 [2024-12-10 00:17:11.930255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.570 [2024-12-10 00:17:11.942413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.570 [2024-12-10 00:17:11.942841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.570 [2024-12-10 00:17:11.942861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.570 [2024-12-10 00:17:11.942871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.570 [2024-12-10 00:17:11.943041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.570 [2024-12-10 00:17:11.943212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.570 [2024-12-10 00:17:11.943224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.570 [2024-12-10 00:17:11.943233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.570 [2024-12-10 00:17:11.943241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.570 [2024-12-10 00:17:11.955419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.570 [2024-12-10 00:17:11.955834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.570 [2024-12-10 00:17:11.955852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.570 [2024-12-10 00:17:11.955863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.570 [2024-12-10 00:17:11.956034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.570 [2024-12-10 00:17:11.956205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.570 [2024-12-10 00:17:11.956216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.571 [2024-12-10 00:17:11.956225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.571 [2024-12-10 00:17:11.956233] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.571 [2024-12-10 00:17:11.968398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.571 [2024-12-10 00:17:11.968799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.571 [2024-12-10 00:17:11.968818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.571 [2024-12-10 00:17:11.968832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.571 [2024-12-10 00:17:11.969002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.571 [2024-12-10 00:17:11.969173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.571 [2024-12-10 00:17:11.969184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.571 [2024-12-10 00:17:11.969192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.571 [2024-12-10 00:17:11.969200] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.571 [2024-12-10 00:17:11.981367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.571 [2024-12-10 00:17:11.981794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.571 [2024-12-10 00:17:11.981812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.571 [2024-12-10 00:17:11.981827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.571 [2024-12-10 00:17:11.981997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.571 [2024-12-10 00:17:11.982168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.571 [2024-12-10 00:17:11.982179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.571 [2024-12-10 00:17:11.982188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.571 [2024-12-10 00:17:11.982196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.571 [2024-12-10 00:17:11.994361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.571 [2024-12-10 00:17:11.994708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.571 [2024-12-10 00:17:11.994727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.571 [2024-12-10 00:17:11.994736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.571 [2024-12-10 00:17:11.994914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.571 [2024-12-10 00:17:11.995086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.571 [2024-12-10 00:17:11.995097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.571 [2024-12-10 00:17:11.995106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.571 [2024-12-10 00:17:11.995114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.571 [2024-12-10 00:17:12.007289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.571 [2024-12-10 00:17:12.007695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.571 [2024-12-10 00:17:12.007713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.571 [2024-12-10 00:17:12.007722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.571 [2024-12-10 00:17:12.007896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.571 [2024-12-10 00:17:12.008066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.571 [2024-12-10 00:17:12.008077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.571 [2024-12-10 00:17:12.008086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.571 [2024-12-10 00:17:12.008094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.571 [2024-12-10 00:17:12.020252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.571 [2024-12-10 00:17:12.020686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.571 [2024-12-10 00:17:12.020704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.571 [2024-12-10 00:17:12.020714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.571 [2024-12-10 00:17:12.020888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.571 [2024-12-10 00:17:12.021060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.571 [2024-12-10 00:17:12.021071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.571 [2024-12-10 00:17:12.021079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.571 [2024-12-10 00:17:12.021087] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.571 [2024-12-10 00:17:12.033259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.571 [2024-12-10 00:17:12.033704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.571 [2024-12-10 00:17:12.033723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.571 [2024-12-10 00:17:12.033732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.572 [2024-12-10 00:17:12.033907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.572 [2024-12-10 00:17:12.034078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.572 [2024-12-10 00:17:12.034092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.572 [2024-12-10 00:17:12.034101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.572 [2024-12-10 00:17:12.034109] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.835 [2024-12-10 00:17:12.046292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.835 [2024-12-10 00:17:12.046717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.835 [2024-12-10 00:17:12.046736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.835 [2024-12-10 00:17:12.046745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.835 [2024-12-10 00:17:12.046920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.835 [2024-12-10 00:17:12.047092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.835 [2024-12-10 00:17:12.047102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.835 [2024-12-10 00:17:12.047112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.835 [2024-12-10 00:17:12.047120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.836 [2024-12-10 00:17:12.059283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.836 [2024-12-10 00:17:12.059713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.836 [2024-12-10 00:17:12.059731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.836 [2024-12-10 00:17:12.059740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.836 [2024-12-10 00:17:12.059913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.836 [2024-12-10 00:17:12.060084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.836 [2024-12-10 00:17:12.060094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.836 [2024-12-10 00:17:12.060103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.836 [2024-12-10 00:17:12.060111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.836 [2024-12-10 00:17:12.072280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.836 [2024-12-10 00:17:12.072639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.836 [2024-12-10 00:17:12.072658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.836 [2024-12-10 00:17:12.072667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.836 [2024-12-10 00:17:12.072842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.836 [2024-12-10 00:17:12.073014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.836 [2024-12-10 00:17:12.073025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.836 [2024-12-10 00:17:12.073035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.836 [2024-12-10 00:17:12.073047] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.836 [2024-12-10 00:17:12.085210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.836 [2024-12-10 00:17:12.085632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.836 [2024-12-10 00:17:12.085650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.836 [2024-12-10 00:17:12.085660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.836 [2024-12-10 00:17:12.085835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.836 [2024-12-10 00:17:12.086007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.836 [2024-12-10 00:17:12.086018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.836 [2024-12-10 00:17:12.086027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.836 [2024-12-10 00:17:12.086036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.836 [2024-12-10 00:17:12.098213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.836 [2024-12-10 00:17:12.098649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.836 [2024-12-10 00:17:12.098667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.836 [2024-12-10 00:17:12.098677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.836 [2024-12-10 00:17:12.098851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.836 [2024-12-10 00:17:12.099023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.836 [2024-12-10 00:17:12.099034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.836 [2024-12-10 00:17:12.099043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.836 [2024-12-10 00:17:12.099051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.836 [2024-12-10 00:17:12.111230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.836 [2024-12-10 00:17:12.111660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.836 [2024-12-10 00:17:12.111678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.836 [2024-12-10 00:17:12.111688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.836 [2024-12-10 00:17:12.111862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.836 [2024-12-10 00:17:12.112035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.836 [2024-12-10 00:17:12.112046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.836 [2024-12-10 00:17:12.112055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.836 [2024-12-10 00:17:12.112064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.836 [2024-12-10 00:17:12.124261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.836 [2024-12-10 00:17:12.124717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.836 [2024-12-10 00:17:12.124735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.836 [2024-12-10 00:17:12.124745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.836 [2024-12-10 00:17:12.124920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.836 [2024-12-10 00:17:12.125090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.836 [2024-12-10 00:17:12.125101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.836 [2024-12-10 00:17:12.125110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.836 [2024-12-10 00:17:12.125118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.836 [2024-12-10 00:17:12.137296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.836 [2024-12-10 00:17:12.137720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.836 [2024-12-10 00:17:12.137738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.836 [2024-12-10 00:17:12.137747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.836 [2024-12-10 00:17:12.137922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.836 [2024-12-10 00:17:12.138094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.836 [2024-12-10 00:17:12.138104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.836 [2024-12-10 00:17:12.138113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.836 [2024-12-10 00:17:12.138121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.836 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.836 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:27.836 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:27.836 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:27.836 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.836 [2024-12-10 00:17:12.150208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.836 [2024-12-10 00:17:12.150644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.836 [2024-12-10 00:17:12.150663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.836 [2024-12-10 00:17:12.150673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.836 [2024-12-10 00:17:12.150847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.836 [2024-12-10 00:17:12.151019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.836 [2024-12-10 00:17:12.151032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.836 [2024-12-10 00:17:12.151041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.836 [2024-12-10 00:17:12.151050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.836 [2024-12-10 00:17:12.163236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.836 [2024-12-10 00:17:12.163569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.836 [2024-12-10 00:17:12.163588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.836 [2024-12-10 00:17:12.163598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.836 [2024-12-10 00:17:12.163769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.836 [2024-12-10 00:17:12.163947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.836 [2024-12-10 00:17:12.163961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.836 [2024-12-10 00:17:12.163971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.836 [2024-12-10 00:17:12.163980] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.836 [2024-12-10 00:17:12.176153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.836 [2024-12-10 00:17:12.176529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.836 [2024-12-10 00:17:12.176547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.836 [2024-12-10 00:17:12.176556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.836 [2024-12-10 00:17:12.176726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.836 [2024-12-10 00:17:12.176905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.837 [2024-12-10 00:17:12.176916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.837 [2024-12-10 00:17:12.176925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.837 [2024-12-10 00:17:12.176933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.837 [2024-12-10 00:17:12.189105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.837 [2024-12-10 00:17:12.189535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.837 [2024-12-10 00:17:12.189554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.837 [2024-12-10 00:17:12.189563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.837 [2024-12-10 00:17:12.189733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.837 [2024-12-10 00:17:12.189909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.837 [2024-12-10 00:17:12.189920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.837 [2024-12-10 00:17:12.189929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.837 [2024-12-10 00:17:12.189938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.837 [2024-12-10 00:17:12.200831] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.837 [2024-12-10 00:17:12.202157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.837 [2024-12-10 00:17:12.202589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.837 [2024-12-10 00:17:12.202608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.837 [2024-12-10 00:17:12.202618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.837 [2024-12-10 00:17:12.202789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.837 [2024-12-10 00:17:12.202965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.837 [2024-12-10 00:17:12.202977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.837 [2024-12-10 00:17:12.202988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.837 [2024-12-10 00:17:12.202996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.837 [2024-12-10 00:17:12.215201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.837 [2024-12-10 00:17:12.215619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.837 [2024-12-10 00:17:12.215637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.837 [2024-12-10 00:17:12.215647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.837 [2024-12-10 00:17:12.215817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.837 [2024-12-10 00:17:12.215997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.837 [2024-12-10 00:17:12.216008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.837 [2024-12-10 00:17:12.216017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.837 [2024-12-10 00:17:12.216025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.837 [2024-12-10 00:17:12.228218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.837 [2024-12-10 00:17:12.228648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.837 [2024-12-10 00:17:12.228667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.837 [2024-12-10 00:17:12.228677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.837 [2024-12-10 00:17:12.228852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.837 [2024-12-10 00:17:12.229023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.837 [2024-12-10 00:17:12.229034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.837 [2024-12-10 00:17:12.229047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.837 [2024-12-10 00:17:12.229055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:27.837 Malloc0 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.837 [2024-12-10 00:17:12.241257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.837 [2024-12-10 00:17:12.241639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.837 [2024-12-10 00:17:12.241657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.837 [2024-12-10 00:17:12.241667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.837 [2024-12-10 00:17:12.241842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.837 [2024-12-10 00:17:12.242013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.837 [2024-12-10 00:17:12.242024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.837 [2024-12-10 00:17:12.242032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:27.837 [2024-12-10 00:17:12.242041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.837 [2024-12-10 00:17:12.254211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.837 [2024-12-10 00:17:12.254621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.837 [2024-12-10 00:17:12.254639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e760 with addr=10.0.0.2, port=4420 00:35:27.837 [2024-12-10 00:17:12.254648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e760 is same with the state(6) to be set 00:35:27.837 [2024-12-10 00:17:12.254818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e760 (9): Bad file descriptor 00:35:27.837 [2024-12-10 00:17:12.254993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:27.837 [2024-12-10 00:17:12.255004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:27.837 [2024-12-10 00:17:12.255013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:35:27.837 [2024-12-10 00:17:12.255021] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.837 [2024-12-10 00:17:12.260649] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.837 00:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 589198 00:35:27.837 [2024-12-10 00:17:12.267198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:27.837 [2024-12-10 00:17:12.297664] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:35:29.342 4939.57 IOPS, 19.30 MiB/s [2024-12-09T23:17:14.759Z] 5784.25 IOPS, 22.59 MiB/s [2024-12-09T23:17:15.696Z] 6466.78 IOPS, 25.26 MiB/s [2024-12-09T23:17:17.076Z] 6984.50 IOPS, 27.28 MiB/s [2024-12-09T23:17:18.013Z] 7434.64 IOPS, 29.04 MiB/s [2024-12-09T23:17:18.954Z] 7780.75 IOPS, 30.39 MiB/s [2024-12-09T23:17:19.892Z] 8094.31 IOPS, 31.62 MiB/s [2024-12-09T23:17:20.827Z] 8343.86 IOPS, 32.59 MiB/s [2024-12-09T23:17:20.827Z] 8563.47 IOPS, 33.45 MiB/s 00:35:36.354 Latency(us) 00:35:36.354 [2024-12-09T23:17:20.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:36.354 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:36.354 Verification LBA range: start 0x0 length 0x4000 00:35:36.354 Nvme1n1 : 15.05 8544.03 33.38 13376.81 0.00 5805.09 445.64 44249.91 00:35:36.354 [2024-12-09T23:17:20.827Z] =================================================================================================================== 00:35:36.354 [2024-12-09T23:17:20.827Z] Total : 8544.03 33.38 13376.81 0.00 5805.09 445.64 44249.91 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:36.613 rmmod nvme_tcp 00:35:36.613 rmmod nvme_fabrics 00:35:36.613 rmmod nvme_keyring 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 590093 ']' 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 590093 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 590093 ']' 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 590093 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:36.613 00:17:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 590093 00:35:36.613 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:36.613 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:36.613 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 590093' 00:35:36.613 killing process with pid 590093 00:35:36.613 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 590093 00:35:36.613 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 590093 00:35:36.873 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:36.873 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:36.873 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:36.873 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:36.873 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:35:36.873 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:36.873 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:35:36.873 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:36.873 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:36.873 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.873 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:36.873 00:17:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:39.415 00:35:39.415 real 0m28.163s 00:35:39.415 user 1m2.786s 00:35:39.415 sys 0m8.656s 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:35:39.415 ************************************ 00:35:39.415 END TEST nvmf_bdevperf 00:35:39.415 ************************************ 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.415 ************************************ 00:35:39.415 START TEST nvmf_target_disconnect 00:35:39.415 ************************************ 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:39.415 * Looking for test storage... 00:35:39.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:39.415 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:39.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.416 --rc genhtml_branch_coverage=1 00:35:39.416 --rc genhtml_function_coverage=1 00:35:39.416 --rc genhtml_legend=1 00:35:39.416 --rc geninfo_all_blocks=1 00:35:39.416 --rc geninfo_unexecuted_blocks=1 00:35:39.416 00:35:39.416 ' 00:35:39.416 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:39.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.416 --rc genhtml_branch_coverage=1 00:35:39.416 --rc genhtml_function_coverage=1 00:35:39.416 --rc genhtml_legend=1 00:35:39.416 --rc geninfo_all_blocks=1 00:35:39.416 --rc geninfo_unexecuted_blocks=1 00:35:39.417 00:35:39.417 ' 00:35:39.417 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:39.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.417 --rc genhtml_branch_coverage=1 00:35:39.417 --rc genhtml_function_coverage=1 00:35:39.417 --rc genhtml_legend=1 00:35:39.417 --rc geninfo_all_blocks=1 00:35:39.417 --rc geninfo_unexecuted_blocks=1 00:35:39.417 00:35:39.417 ' 00:35:39.417 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:39.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.417 --rc genhtml_branch_coverage=1 00:35:39.417 --rc genhtml_function_coverage=1 00:35:39.417 --rc genhtml_legend=1 00:35:39.417 --rc geninfo_all_blocks=1 00:35:39.417 --rc geninfo_unexecuted_blocks=1 00:35:39.417 00:35:39.417 ' 00:35:39.417 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:39.417 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:35:39.418 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:39.418 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:39.418 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:39.418 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:39.418 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:39.418 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:39.418 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:39.418 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:39.420 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:39.420 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:39.420 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:35:39.420 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:35:39.420 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:39.421 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:39.421 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:39.421 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:39.421 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:39.421 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:39.421 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:39.421 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:39.421 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:39.421 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.421 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.422 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.422 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:39.422 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.422 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:39.422 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:39.422 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:39.422 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:39.422 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:39.422 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:39.423 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:39.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:39.423 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:39.423 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:39.423 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:39.423 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:39.423 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:39.423 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:39.423 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:39.423 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:39.423 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:39.423 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:39.423 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:39.423 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:39.423 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.424 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:39.424 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:39.424 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:39.424 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:39.424 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:39.424 00:17:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:47.556 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:47.556 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:47.556 Found net devices under 0000:af:00.0: cvl_0_0 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:47.556 Found net devices under 0000:af:00.1: cvl_0_1 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:47.556 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:47.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:47.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:35:47.556 00:35:47.556 --- 10.0.0.2 ping statistics --- 00:35:47.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.557 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:47.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:47.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:35:47.557 00:35:47.557 --- 10.0.0.1 ping statistics --- 00:35:47.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.557 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:47.557 ************************************ 00:35:47.557 START TEST nvmf_target_disconnect_tc1 00:35:47.557 ************************************ 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:47.557 00:17:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:47.557 00:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:47.557 [2024-12-10 00:17:31.115980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.557 [2024-12-10 00:17:31.116030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1527ee0 with addr=10.0.0.2, port=4420 00:35:47.557 [2024-12-10 00:17:31.116059] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:47.557 [2024-12-10 00:17:31.116076] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:47.557 [2024-12-10 00:17:31.116085] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:47.557 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:47.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:47.557 Initializing NVMe Controllers 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:47.557 00:35:47.557 real 0m0.143s 00:35:47.557 user 0m0.059s 00:35:47.557 sys 0m0.084s 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:47.557 ************************************ 00:35:47.557 END TEST nvmf_target_disconnect_tc1 00:35:47.557 ************************************ 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:47.557 ************************************ 00:35:47.557 START TEST nvmf_target_disconnect_tc2 00:35:47.557 ************************************ 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=595551 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 595551 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 595551 ']' 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:47.557 00:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.557 [2024-12-10 00:17:31.266821] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:35:47.557 [2024-12-10 00:17:31.266878] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:47.557 [2024-12-10 00:17:31.362193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:47.557 [2024-12-10 00:17:31.402772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:47.557 [2024-12-10 00:17:31.402812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:47.557 [2024-12-10 00:17:31.402826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:47.557 [2024-12-10 00:17:31.402835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:47.557 [2024-12-10 00:17:31.402842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:47.557 [2024-12-10 00:17:31.404458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:47.557 [2024-12-10 00:17:31.404568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:47.557 [2024-12-10 00:17:31.404678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:47.557 [2024-12-10 00:17:31.404679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.818 Malloc0 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.818 [2024-12-10 00:17:32.182101] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.818 00:17:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.818 [2024-12-10 00:17:32.210335] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=595600 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:47.818 00:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:50.386 00:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 595551 00:35:50.386 00:17:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error 
(sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Write completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Write completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Write completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Write completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Write completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Write completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Write completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Write completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Write completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Write completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Write completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Write completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Write completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Write completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.386 starting I/O failed 00:35:50.386 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 [2024-12-10 00:17:34.248687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write 
completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 [2024-12-10 00:17:34.248917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 
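For orientation while reading the failure dump above and below: the target-side configuration recorded earlier in this log (the rpc_cmd calls creating Malloc0, the tcp transport, and nqn.2016-06.io.spdk:cnode1) corresponds roughly to the scripts/rpc.py sequence sketched here. This is a reading aid only; it assumes rpc_cmd in the autotest scripts wraps scripts/rpc.py against the default RPC socket, which the log does not show.

    # Approximate equivalent of the rpc_cmd calls logged above (rpc.py wrapper and socket path assumed):
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420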
00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 [2024-12-10 00:17:34.249133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Write completed with error (sct=0, sc=8) 00:35:50.387 
starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 Read completed with error (sct=0, sc=8) 00:35:50.387 starting I/O failed 00:35:50.387 [2024-12-10 00:17:34.249368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.387 [2024-12-10 00:17:34.249675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.387 [2024-12-10 00:17:34.249701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.387 qpair failed and we were unable to recover it. 00:35:50.387 [2024-12-10 00:17:34.249820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.387 [2024-12-10 00:17:34.249837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.387 qpair failed and we were unable to recover it. 00:35:50.387 [2024-12-10 00:17:34.249994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.387 [2024-12-10 00:17:34.250005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.387 qpair failed and we were unable to recover it. 00:35:50.387 [2024-12-10 00:17:34.250097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.387 [2024-12-10 00:17:34.250108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.387 qpair failed and we were unable to recover it. 00:35:50.387 [2024-12-10 00:17:34.250286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.387 [2024-12-10 00:17:34.250297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.387 qpair failed and we were unable to recover it. 00:35:50.387 [2024-12-10 00:17:34.250462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.387 [2024-12-10 00:17:34.250473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.387 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.250576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.250588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.250691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.250703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 
00:35:50.388 [2024-12-10 00:17:34.250802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.250813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.250925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.250937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.251029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.251041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.251192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.251204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.251404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.251417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.251557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.251570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.251674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.251686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.251762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.251773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.251869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.251881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.251972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.251983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 
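The repeated "connect() failed, errno = 111" entries above and below are ECONNREFUSED on Linux: the reconnect host keeps retrying 10.0.0.2:4420 after the target was killed, and nothing is listening on that port any more. A minimal shell check along the same lines, assuming it is run from the initiator host while the target is down:

    # ECONNREFUSED (errno 111) reproduction sketch using bash's /dev/tcp redirection:
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
        echo "connection to 10.0.0.2:4420 refused or unreachable (cf. errno = 111 in the log)"
    fi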
00:35:50.388 [2024-12-10 00:17:34.252124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.252135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.252268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.252280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.252346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.252358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.252488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.252499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.252671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.252691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.252846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.252859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.252990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.253006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.253158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.253170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.253316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.253328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.253509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.253522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 
00:35:50.388 [2024-12-10 00:17:34.253655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.253667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.253756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.253767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.253862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.253874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.254036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.254050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.254190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.254202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.254292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.254304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.254383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.254395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.254539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.254561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.254665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.254690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.254830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.254844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 
00:35:50.388 [2024-12-10 00:17:34.254925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.254937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.388 [2024-12-10 00:17:34.255038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.388 [2024-12-10 00:17:34.255050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.388 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.255126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.255138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.255308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.255322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.255453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.255466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.255563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.255575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.255660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.255672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.255744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.255756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.255951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.255964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.256055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.256066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 
00:35:50.389 [2024-12-10 00:17:34.256197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.256209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.256355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.256371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.256467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.256480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.256623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.256638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.256737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.256749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.256819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.256835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.256924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.256935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.257016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.257028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.257094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.257105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.257256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.257268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 
00:35:50.389 [2024-12-10 00:17:34.257347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.257359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.257447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.257462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.257537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.257551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.257639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.257654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.257749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.257768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.257842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.257857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.257951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.257968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.258115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.258130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.258210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.258224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.258314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.258329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 
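The failing I/O in this dump comes from the reconnect example started above (build/examples/reconnect, reconnectpid=595600). Its invocation is repeated here for reference; the per-flag descriptions are an interpretation based on the usual SPDK perf-style options, not something stated in the log itself.

    # Invocation as logged (one line in the original); flag meanings below are assumed:
    #   -q 32      queue depth per qpair
    #   -o 4096    I/O size in bytes
    #   -w randrw  mixed random read/write workload
    #   -M 50      read percentage of the mix
    #   -t 10      run time in seconds
    #   -c 0xF     core mask for the example app
    #   -r ...     transport ID of the target listener
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'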
00:35:50.389 [2024-12-10 00:17:34.258467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.258481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.258557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.258572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.258670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.258685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.258773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.258788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.258980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.258995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.259081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.259096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.259179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.259194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.259338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.259354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.259437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.259453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 00:35:50.389 [2024-12-10 00:17:34.259544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.389 [2024-12-10 00:17:34.259559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.389 qpair failed and we were unable to recover it. 
00:35:50.389 [2024-12-10 00:17:34.259646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.259660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.259736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.259751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.259849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.259864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.259935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.259950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.260030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.260044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.260133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.260148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.260302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.260317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.260411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.260426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.260507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.260522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.260693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.260709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 
00:35:50.390 [2024-12-10 00:17:34.260780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.260794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.260968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.260988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.261069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.261085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.261189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.261204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.261366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.261382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.261560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.261601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.261806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.261862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.262016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.262056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.262256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.262298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.262520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.262561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 
00:35:50.390 [2024-12-10 00:17:34.262747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.262764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.262937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.262955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.263142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.263159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.263391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.263407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.263492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.263511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.263600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.263615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.263793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.263810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.263970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.263987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.264070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.264085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.264178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.264193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 
00:35:50.390 [2024-12-10 00:17:34.264264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.264279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.264364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.264380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.264472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.264487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.264571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.390 [2024-12-10 00:17:34.264586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.390 qpair failed and we were unable to recover it. 00:35:50.390 [2024-12-10 00:17:34.264774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.264790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.264876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.264892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.264968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.264983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.265129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.265145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.265221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.265236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.265332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.265347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 
00:35:50.391 [2024-12-10 00:17:34.265437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.265452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.265532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.265547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.265618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.265633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.265773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.265789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.266000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.266017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.266103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.266118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.266203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.266219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.266395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.266411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.266489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.266504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.266643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.266659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 
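These failures were provoked deliberately: target_disconnect.sh@45 issued "kill -9 595551" (the PID of the nvmf target application recorded earlier in this log) and then slept, leaving the reconnect example retrying against a dead listener. A sketch of that step, with an assumed name for the variable holding the saved target PID:

    # Disconnect step as recorded at target_disconnect.sh@45 and @47
    # ("nvmfpid" is an assumed variable name; the log only shows the expanded PID 595551):
    kill -9 "$nvmfpid"   # forcibly stop the NVMe-oF target while the host still has active qpairs
    sleep 2              # give the host time to notice the dead connection and begin retrying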
00:35:50.391 [2024-12-10 00:17:34.266761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.266778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.266964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.266982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.267073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.267090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.267258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.267276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.267502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.267525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.267641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.267663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.267756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.267778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.267933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.267955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.268049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.268071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.268157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.268178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 
00:35:50.391 [2024-12-10 00:17:34.268263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.268284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.268442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.268464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.268577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.268598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.268760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.268782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.268962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.269011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.269156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.269195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.269397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.391 [2024-12-10 00:17:34.269437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.391 qpair failed and we were unable to recover it. 00:35:50.391 [2024-12-10 00:17:34.269631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.269672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.269881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.269923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.270069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.270109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 
00:35:50.392 [2024-12-10 00:17:34.270234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.270255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.270362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.270384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.270546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.270585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.270797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.270852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.271056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.271097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.271288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.271309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.271534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.271575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.271789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.271841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.272064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.272105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.272230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.272271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 
00:35:50.392 [2024-12-10 00:17:34.272531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.272572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.272766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.272806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.272948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.272989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.273148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.273189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.273336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.273377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.273601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.273641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.273854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.273910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.274176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.274217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.274345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.274386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.274522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.274562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 
00:35:50.392 [2024-12-10 00:17:34.274756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.274797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.275022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.275065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.275221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.275261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.275468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.275490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.275576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.275597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.275709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.275730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.275814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.275842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.276081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.276102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.276261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.276283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.276443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.276465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 
00:35:50.392 [2024-12-10 00:17:34.276562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.276583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.276665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.276686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.276844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.276866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.276950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.276971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.277130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.277152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.392 qpair failed and we were unable to recover it. 00:35:50.392 [2024-12-10 00:17:34.277402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.392 [2024-12-10 00:17:34.277443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.277595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.277635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.277779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.277820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.278036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.278077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.278233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.278273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 
00:35:50.393 [2024-12-10 00:17:34.278482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.278511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.278622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.278650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.278818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.278858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.278977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.279006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.279172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.279200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.279364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.279393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.279533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.279573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.279764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.279804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.279969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.280015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.280163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.280203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 
00:35:50.393 [2024-12-10 00:17:34.280415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.280455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.280660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.280700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.280842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.280871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.281040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.281069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.281198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.281226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.281479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.281508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.281635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.281663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.281858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.281907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.282101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.282141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.282282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.282322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 
00:35:50.393 [2024-12-10 00:17:34.282471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.282499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.282668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.282696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.284211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.284261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.284563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.284594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.285975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.286020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.286192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.286223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.286432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.286462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.288502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.288570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.288848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.288895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 00:35:50.393 [2024-12-10 00:17:34.289111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.393 [2024-12-10 00:17:34.289154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.393 qpair failed and we were unable to recover it. 
00:35:50.393 [2024-12-10 00:17:34.289356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.289396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.289675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.289716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.289968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.290010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.290222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.290263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.290552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.290592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.290810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.290870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.291085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.291126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.291431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.291471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.291704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.291743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.291965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.292007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 
00:35:50.394 [2024-12-10 00:17:34.292266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.292306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.292449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.292491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.292700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.292740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.292888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.292930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.293141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.293181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.293373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.293413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.293697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.293737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.293971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.294012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.294154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.294194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.294345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.294386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 
00:35:50.394 [2024-12-10 00:17:34.294609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.294649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.294870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.294911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.295123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.295163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.295359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.295416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.295629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.295669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.295920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.295963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.296191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.296232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.296449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.296489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.296770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.296810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.296970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.297011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 
00:35:50.394 [2024-12-10 00:17:34.297217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.297257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.297444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.297484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.297791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.297845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.298056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.298097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.298244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.298283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.300031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.300093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.300425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.300467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.300663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.300705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.300917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.300958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 00:35:50.394 [2024-12-10 00:17:34.301117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.394 [2024-12-10 00:17:34.301158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.394 qpair failed and we were unable to recover it. 
00:35:50.395 [2024-12-10 00:17:34.301363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.301404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.301626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.301665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.301858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.301899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.302183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.302223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.302452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.302492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.302684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.302725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.303031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.303109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.303329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.303374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.303628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.303669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.303933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.303975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 
00:35:50.395 [2024-12-10 00:17:34.304138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.304178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.304389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.304429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.304713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.304752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.304961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.305002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.305163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.305203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.305340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.305380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.305526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.305565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.305776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.305815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.306032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.306072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.306219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.306267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 
00:35:50.395 [2024-12-10 00:17:34.306528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.306569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.306773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.306813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.307068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.307109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.307234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.307274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.307418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.307457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.307606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.307645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.307929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.307969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.308178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.308217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.308444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.308484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.308734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.308774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 
00:35:50.395 [2024-12-10 00:17:34.308921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.308960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.309113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.309153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.309417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.309458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.309615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.309656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.311490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.311552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.311864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.311909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.395 qpair failed and we were unable to recover it. 00:35:50.395 [2024-12-10 00:17:34.312109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.395 [2024-12-10 00:17:34.312150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.312384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.312424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.312648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.312690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.313001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.313041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 
00:35:50.396 [2024-12-10 00:17:34.313356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.313396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.313553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.313594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.313811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.313860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.314107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.314147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.314312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.314353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.314505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.314545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.314745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.314785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.315023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.315065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.315297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.315337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.315472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.315511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 
00:35:50.396 [2024-12-10 00:17:34.315727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.315765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.315990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.316030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.316291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.316331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.316631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.316671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.316902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.316944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.317152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.317192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.317450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.317490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.317686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.317726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.317921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.317962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.318168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.318214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 
00:35:50.396 [2024-12-10 00:17:34.318421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.318461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.318740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.318780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.319071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.319112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.319250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.319290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.319435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.319475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.319601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.319641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.396 [2024-12-10 00:17:34.319776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.396 [2024-12-10 00:17:34.319816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.396 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.320044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.320085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.320310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.320349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.320564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.320604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 
00:35:50.397 [2024-12-10 00:17:34.320835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.320878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.321132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.321172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.321323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.321363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.321561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.321602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.321809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.321892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.322099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.322139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.322345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.322385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.322521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.322561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.322761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.322813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.323060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.323102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 
00:35:50.397 [2024-12-10 00:17:34.323295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.323335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.323478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.323518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.323710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.323749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.323991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.324033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.324224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.324264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.324402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.324441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.324633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.324710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.325004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.325083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.325300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.325344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.325497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.325538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 
00:35:50.397 [2024-12-10 00:17:34.325854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.325897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.326035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.326075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.326270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.326311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.326539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.326579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.326850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.326892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.327018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.327058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.327322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.327362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.327501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.327541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.327682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.327723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.327937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.327978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 
00:35:50.397 [2024-12-10 00:17:34.328112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.328153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.328413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.328453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.328602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.328642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.328924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.328965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.329170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.329209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.397 [2024-12-10 00:17:34.329353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.397 [2024-12-10 00:17:34.329393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.397 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.329531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.329572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.329798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.329854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.330113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.330154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.330280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.330319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 
00:35:50.398 [2024-12-10 00:17:34.330462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.330502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.330626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.330665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.330854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.330895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.331096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.331142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.331401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.331441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.331643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.331683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.331809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.331868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.332013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.332053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.332188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.332229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.332361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.332401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 
00:35:50.398 [2024-12-10 00:17:34.332656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.332696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.332859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.332903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.333102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.333142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.333355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.333395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.333603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.333644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.333798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.333849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.334052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.334092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.334332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.334372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.334519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.334560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.334751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.334791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 
00:35:50.398 [2024-12-10 00:17:34.334943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.334983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.335196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.335235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.335491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.335531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.335696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.335735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.335943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.335984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.336143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.336183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.336423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.336463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.336683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.336723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.336923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.336964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.337093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.337133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 
00:35:50.398 [2024-12-10 00:17:34.337291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.337337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.337643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.337684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.337832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.337872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.338010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.338050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.338255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.338295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.398 qpair failed and we were unable to recover it. 00:35:50.398 [2024-12-10 00:17:34.338448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.398 [2024-12-10 00:17:34.338488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.338679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.338719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.338949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.338990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.339128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.339167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.339367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.339407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 
00:35:50.399 [2024-12-10 00:17:34.339621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.339661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.339792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.339857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.340054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.340094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.340246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.340286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.340458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.340499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.340690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.340729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.340935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.340977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.341235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.341275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.341503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.341543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.341858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.341899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 
00:35:50.399 [2024-12-10 00:17:34.342040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.342081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.342340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.342379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.342513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.342553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.342744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.342784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.342926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.342965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.343123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.343164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.343366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.343406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.343598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.343639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.343788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.343838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.343983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.344023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 
00:35:50.399 [2024-12-10 00:17:34.344173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.344213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.344346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.344386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.344588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.344628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.344819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.344887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.345025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.345065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.345274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.345314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.345452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.345508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.345711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.345751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.345906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.345948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.346169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.346209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 
00:35:50.399 [2024-12-10 00:17:34.346469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.346509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.346656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.346697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.346906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.346947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.347161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.347201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.347403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.399 [2024-12-10 00:17:34.347443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.399 qpair failed and we were unable to recover it. 00:35:50.399 [2024-12-10 00:17:34.347581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.347621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.347814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.347865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.347997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.348037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.348233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.348273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.348417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.348457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 
00:35:50.400 [2024-12-10 00:17:34.348720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.348760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.348902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.348942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.349072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.349112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.349316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.349356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.349615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.349655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.349853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.349894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.350086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.350126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.350267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.350306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.350519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.350559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.350772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.350813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 
00:35:50.400 [2024-12-10 00:17:34.351016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.351056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.351262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.351302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.351505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.351545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.351701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.351740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.351937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.351979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.352126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.352166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.352358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.352398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.352530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.352569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.352774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.352820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.353074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.353115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 
00:35:50.400 [2024-12-10 00:17:34.353325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.353365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.353524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.353563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.353700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.353740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.353913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.353954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.354086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.354126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.354274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.354314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.354522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.354563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.354779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.354818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.354965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.355005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 00:35:50.400 [2024-12-10 00:17:34.355202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.400 [2024-12-10 00:17:34.355242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.400 qpair failed and we were unable to recover it. 
00:35:50.401 [2024-12-10 00:17:34.355479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.355520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.355676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.355716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.355943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.355984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.356111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.356151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.356280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.356321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.356444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.356483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.356726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.356766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.356987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.357028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.357150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.357189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.357326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.357366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 
00:35:50.401 [2024-12-10 00:17:34.357572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.357613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.357803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.357852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.357994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.358034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.358235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.358275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.358468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.358508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.358710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.358756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.359020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.359061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.359201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.359240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.359379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.359419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.359556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.359596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 
00:35:50.401 [2024-12-10 00:17:34.359800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.359854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.360066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.360106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.360332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.360372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.360518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.360558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.360693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.360733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.360925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.360967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.361091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.361132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.361268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.361308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.361498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.361537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.361743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.361784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 
00:35:50.401 [2024-12-10 00:17:34.362072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.362151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.362322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.362368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.362509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.362549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.362746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.362787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.363001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.363043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.363198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.363237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.363472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.363512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.363703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.363744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.363959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.401 [2024-12-10 00:17:34.364000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.401 qpair failed and we were unable to recover it. 00:35:50.401 [2024-12-10 00:17:34.364208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.364248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 
00:35:50.402 [2024-12-10 00:17:34.364556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.364597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.364732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.364772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.365022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.365072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.365281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.365321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.365456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.365496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.365697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.365738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.365998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.366039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.366190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.366231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.366369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.366409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.366561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.366600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 
00:35:50.402 [2024-12-10 00:17:34.366793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.366841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.367055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.367095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.367231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.367270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.367486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.367526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.367732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.367772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.367934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.367975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.368188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.368228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.368537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.368578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.368732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.368772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.368917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.368958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 
00:35:50.402 [2024-12-10 00:17:34.369232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.369273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.369424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.369464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.369616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.369656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.369785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.369838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.370046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.370086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.370226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.370265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.370420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.370461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.370766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.370806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.371050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.371091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.371300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.371341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 
00:35:50.402 [2024-12-10 00:17:34.371489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.371530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.371676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.371716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.371854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.371896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.372043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.372083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.372406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.372447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.372644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.372684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.372843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.372886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.373149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.402 [2024-12-10 00:17:34.373189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.402 qpair failed and we were unable to recover it. 00:35:50.402 [2024-12-10 00:17:34.373338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.373379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.373508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.373548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 
00:35:50.403 [2024-12-10 00:17:34.373758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.373799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.374012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.374052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.374245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.374292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.374510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.374551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.374754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.374794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.374939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.374979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.375184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.375225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.375369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.375410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.375693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.375733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.375927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.375969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 
00:35:50.403 [2024-12-10 00:17:34.376122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.376162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.376370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.376410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.376556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.376597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.376745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.376785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.377002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.377043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.377258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.377298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.377539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.377579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.377721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.377761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.377947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.377990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.378281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.378321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 
00:35:50.403 [2024-12-10 00:17:34.378455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.378495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.378760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.378800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.379100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.379140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.379349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.379390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.379594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.379635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.379857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.379898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.380039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.380079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.380224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.380265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.380411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.380451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.380664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.380705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 
00:35:50.403 [2024-12-10 00:17:34.380910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.380953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.381177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.381217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.381423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.381464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.381603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.381643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.381776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.381816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.381964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.382005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.382146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.403 [2024-12-10 00:17:34.382187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.403 qpair failed and we were unable to recover it. 00:35:50.403 [2024-12-10 00:17:34.382338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.382379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.382577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.382618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.382759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.382799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 
00:35:50.404 [2024-12-10 00:17:34.382950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.382990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.383221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.383261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.383483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.383529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.383743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.383784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.383930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.383971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.384179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.384218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.384502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.384542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.384681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.384721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.384910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.384951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.385144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.385184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 
00:35:50.404 [2024-12-10 00:17:34.385378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.385419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.385564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.385603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.385744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.385784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.385946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.385987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.386180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.386221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.386427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.386467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.386618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.386659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.386806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.386881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.387009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.387049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.387181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.387221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 
00:35:50.404 [2024-12-10 00:17:34.387500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.387541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.387739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.387779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.388007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.388048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.388193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.388234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.388462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.388503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.388713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.388753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.388922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.388964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.389123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.389163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.389384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.389424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.389714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.389754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 
00:35:50.404 [2024-12-10 00:17:34.389965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.390007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.390292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.390332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.390605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.390645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.390784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.390834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.390964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.391004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.391151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.391192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.404 [2024-12-10 00:17:34.391390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.404 [2024-12-10 00:17:34.391431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.404 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.391592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.391633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.391773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.391814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.392056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.392097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 
00:35:50.405 [2024-12-10 00:17:34.392249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.392290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.392488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.392528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.392719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.392765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.393014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.393055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.393251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.393291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.393513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.393552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.393745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.393785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.393995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.394035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.394238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.394278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.394480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.394521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 
00:35:50.405 [2024-12-10 00:17:34.394741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.394782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.395068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.395109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.395312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.395352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.395493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.395533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.395672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.395712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.395860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.395902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.396110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.396150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.396343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.396383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.396595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.396635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.396842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.396884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 
00:35:50.405 [2024-12-10 00:17:34.397023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.397063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.397189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.397229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.397363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.397403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.397620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.397661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.397873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.397914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.398185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.398225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.398421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.398461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.398670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.398710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.398850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.398891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.399157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.399198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 
00:35:50.405 [2024-12-10 00:17:34.399341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.405 [2024-12-10 00:17:34.399380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.405 qpair failed and we were unable to recover it. 00:35:50.405 [2024-12-10 00:17:34.399522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.399563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.399808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.399863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.400072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.400112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.400333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.400373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.400675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.400715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.400861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.400902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.401106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.401146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.401351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.401392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.401607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.401647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 
00:35:50.406 [2024-12-10 00:17:34.401803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.401851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.402065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.402105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.402315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.402361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.402509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.402548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.402752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.402792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.403032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.403073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.403284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.403324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.403609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.403650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.403880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.403922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.404074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.404114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 
00:35:50.406 [2024-12-10 00:17:34.404322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.404362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.404507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.404546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.404753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.404793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.405013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.405053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.405277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.405317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.405520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.405561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.405772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.405814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.406028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.406068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.406279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.406319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.406649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.406689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 
00:35:50.406 [2024-12-10 00:17:34.406973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.407016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.407216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.407255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.407381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.407422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.407655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.407695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.407858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.407899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.408124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.408165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.408328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.408370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.408565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.408606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.408833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.408875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 00:35:50.406 [2024-12-10 00:17:34.409088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.406 [2024-12-10 00:17:34.409128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.406 qpair failed and we were unable to recover it. 
00:35:50.406 [2024-12-10 00:17:34.409320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.409360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.409512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.409552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.409768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.409808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.410032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.410072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.410381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.410421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.410647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.410687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.410901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.410942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.411222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.411262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.411472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.411512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.411650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.411691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 
00:35:50.407 [2024-12-10 00:17:34.411843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.411885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.412023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.412062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.412354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.412401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.412611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.412652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.412864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.412905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.413186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.413226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.413479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.413520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.413659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.413699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.413905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.413946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.414204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.414245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 
00:35:50.407 [2024-12-10 00:17:34.414444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.414484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.414789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.414863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.415018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.415059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.415202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.415242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.415447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.415487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.415720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.415760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.415989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.416031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.416287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.416327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.416565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.416606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.416868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.416909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 
00:35:50.407 [2024-12-10 00:17:34.417221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.417261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.417541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.417583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.417785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.417832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.418115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.418156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.418355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.418397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.418555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.418595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.418808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.418860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.418995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.419037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.419187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.407 [2024-12-10 00:17:34.419227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.407 qpair failed and we were unable to recover it. 00:35:50.407 [2024-12-10 00:17:34.419502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.419542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 
00:35:50.408 [2024-12-10 00:17:34.419670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.419711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.419924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.419966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.420229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.420270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.420408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.420449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.420754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.420795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.421018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.421059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.421273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.421314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.421542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.421582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.421721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.421762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.422000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.422043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 
00:35:50.408 [2024-12-10 00:17:34.422273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.422313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.422456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.422497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.422648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.422695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.422894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.422936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.423195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.423235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.423372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.423412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.423646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.423686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.423949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.423992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.424133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.424174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.424313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.424354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 
00:35:50.408 [2024-12-10 00:17:34.424551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.424591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.424807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.424859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.425010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.425050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.425308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.425348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.425565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.425605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.425731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.425771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.425984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.426026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.426222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.426263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.426527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.426567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.426707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.426747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 
00:35:50.408 [2024-12-10 00:17:34.426907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.426949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.427108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.427148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.427416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.427456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.427597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.427638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.427849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.427892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.428105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.428144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.428281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.428321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.428580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.428621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.408 [2024-12-10 00:17:34.428814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.408 [2024-12-10 00:17:34.428864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.408 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.429068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.429109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 
00:35:50.409 [2024-12-10 00:17:34.429258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.429299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.429493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.429533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.429668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.429708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.429907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.429950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.430146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.430185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.430378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.430418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.430548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.430588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.430794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.430865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.431001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.431041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.431303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.431343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 
00:35:50.409 [2024-12-10 00:17:34.431536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.431576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.431772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.431812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.431971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.432018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.432277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.432317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.432511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.432551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.432692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.432732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.432876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.432917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.433131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.433171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.433301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.433342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.433540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.433580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 
00:35:50.409 [2024-12-10 00:17:34.433724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.433764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.433975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.434016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.434159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.434199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.434394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.434434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.434740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.434780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.434946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.434987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.435147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.435188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.435736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.435776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.435913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.435954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.436087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.436127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 
00:35:50.409 [2024-12-10 00:17:34.436398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.436438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.436697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.436737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.436947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.409 [2024-12-10 00:17:34.436988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.409 qpair failed and we were unable to recover it. 00:35:50.409 [2024-12-10 00:17:34.437207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.437248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.437515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.437555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.437693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.437733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.437993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.438034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.438189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.438228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.438430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.438471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.438667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.438709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 
00:35:50.410 [2024-12-10 00:17:34.438970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.439014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.439226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.439265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.439393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.439431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.439713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.439754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.439905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.439950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.440148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.440188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.440323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.440363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.440495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.440534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.440747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.440787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.441016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.441056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 
00:35:50.410 [2024-12-10 00:17:34.441201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.441241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.441441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.441481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.441769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.441815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.442084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.442124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.442321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.442362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.442518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.442559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.442753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.442793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.442945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.442986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.443193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.443234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.443450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.443490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 
00:35:50.410 [2024-12-10 00:17:34.443631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.443672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.443886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.443928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.444063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.444103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.444383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.444423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.444645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.444685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.444994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.445035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.445182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.445222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.445371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.445410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.445736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.445777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.446001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.446042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 
00:35:50.410 [2024-12-10 00:17:34.446336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.446376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.446705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.410 [2024-12-10 00:17:34.446746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.410 qpair failed and we were unable to recover it. 00:35:50.410 [2024-12-10 00:17:34.446960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.447001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.447197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.447237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.447427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.447467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.447678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.447718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.447926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.447968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.448122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.448162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.448441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.448481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.448678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.448719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 
00:35:50.411 [2024-12-10 00:17:34.448922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.448963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.449171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.449211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.449425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.449465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.449726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.449765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.449911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.449953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.450239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.450279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.452138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.452201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.452440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.452483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.452681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.452721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.452952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.452995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 
00:35:50.411 [2024-12-10 00:17:34.453261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.453301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.453512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.453552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.453813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.453872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.454077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244af20 is same with the state(6) to be set 00:35:50.411 [2024-12-10 00:17:34.454400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.454480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.454645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.454688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.454902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.454946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.455146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.455187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.455394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.455435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.455595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.455635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 
00:35:50.411 [2024-12-10 00:17:34.455849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.455891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.456030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.456070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.456216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.456257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.456389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.456429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.456637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.456677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.456886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.456927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.457069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.457120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.457445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.457485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.457715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.457756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.457919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.457962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 
00:35:50.411 [2024-12-10 00:17:34.458160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.458200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.458395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.411 [2024-12-10 00:17:34.458435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.411 qpair failed and we were unable to recover it. 00:35:50.411 [2024-12-10 00:17:34.458643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.458683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.458896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.458938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.459137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.459177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.459411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.459451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.459604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.459645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.459905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.459947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.460150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.460191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.460336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.460377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 
00:35:50.412 [2024-12-10 00:17:34.460668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.460709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.460857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.460899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.461119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.461159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.461389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.461430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.461631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.461671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.461880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.461921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.462074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.462115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.462257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.462297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.462436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.462476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.462634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.462675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 
00:35:50.412 [2024-12-10 00:17:34.462804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.462873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.463078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.463118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.463312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.463353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.463491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.463532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.463681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.463721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.463982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.464024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.464165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.464206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.464335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.464375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.464567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.464607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.464804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.464855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 
00:35:50.412 [2024-12-10 00:17:34.464998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.465038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.465165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.465205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.465364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.465404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.465556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.465597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.465724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.465764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.466032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.466074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.466209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.466256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.466468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.466508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.466769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.466809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.467063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.467104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 
00:35:50.412 [2024-12-10 00:17:34.467296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.412 [2024-12-10 00:17:34.467336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.412 qpair failed and we were unable to recover it. 00:35:50.412 [2024-12-10 00:17:34.467468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.467509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.467712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.467753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.467912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.467955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.468178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.468219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.468361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.468401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.468592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.468633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.468856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.468899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.469106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.469146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.469273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.469313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 
00:35:50.413 [2024-12-10 00:17:34.469520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.469561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.469833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.469874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.470113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.470154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.470350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.470389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.470525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.470565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.470691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.470731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.470942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.470983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.471197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.471238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.471384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.471424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.471565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.471605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 
00:35:50.413 [2024-12-10 00:17:34.471818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.471867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.472006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.472046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.472244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.472284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.472433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.472474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.472604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.472645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.472773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.472813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.473029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.473069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.473332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.473373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.473513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.473552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 00:35:50.413 [2024-12-10 00:17:34.473700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.473740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it. 
00:35:50.413 [2024-12-10 00:17:34.473932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.413 [2024-12-10 00:17:34.473975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.413 qpair failed and we were unable to recover it.
00:35:50.413-00:35:50.419 [2024-12-10 00:17:34.474170 through 00:17:34.525978] The same pair of errors repeats for every subsequent connection attempt in this window, first against tqpair=0x7fa974000b90, then tqpair=0x7fa980000b90, and finally tqpair=0x7fa978000b90, all targeting addr=10.0.0.2, port=4420: posix.c:1054:posix_sock_create reports "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports the corresponding sock connection error, and each attempt ends with "qpair failed and we were unable to recover it."
00:35:50.419 [2024-12-10 00:17:34.526257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.526297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.526556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.526596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.526720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.526760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.526963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.527004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.527156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.527195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.527425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.527464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.527618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.527657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.527803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.527852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.528065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.528106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.528268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.528307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 
00:35:50.419 [2024-12-10 00:17:34.528456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.528496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.528753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.528794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.529066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.529107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.529260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.529300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.529434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.529472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.529680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.529719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.529933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.529973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.530166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.530205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.419 [2024-12-10 00:17:34.530358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.419 [2024-12-10 00:17:34.530398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.419 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.530553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.530594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 
00:35:50.420 [2024-12-10 00:17:34.530848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.530890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.531116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.531156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.531310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.531349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.531544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.531589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.531799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.531848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.532064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.532104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.532313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.532352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.532482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.532522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.532677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.532717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.532957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.532997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 
00:35:50.420 [2024-12-10 00:17:34.533160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.533201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.533481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.533521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.533838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.533879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.534142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.534182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.534312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.534352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.534563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.534604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.534863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.534904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.535114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.535155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.535360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.535399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.535616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.535656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 
00:35:50.420 [2024-12-10 00:17:34.535918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.535959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.536149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.536189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.536419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.536459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.536620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.536659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.536800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.536849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.537046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.537085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.537291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.537331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.537459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.537499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.537692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.537732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.538016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.538057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 
00:35:50.420 [2024-12-10 00:17:34.538339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.538380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.538585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.538624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.538846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.538889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.420 [2024-12-10 00:17:34.539058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.420 [2024-12-10 00:17:34.539097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.420 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.539330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.539370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.539526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.539566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.539840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.539881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.540076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.540117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.540327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.540367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.540628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.540667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 
00:35:50.421 [2024-12-10 00:17:34.540947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.540988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.541198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.541238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.541485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.541524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.541802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.541877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.542092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.542132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.542419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.542460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.542653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.542693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.542923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.542964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.543165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.543206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.543411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.543451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 
00:35:50.421 [2024-12-10 00:17:34.543597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.543636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.543757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.543796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.544050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.544091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.544379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.544420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.544724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.544765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.545009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.545050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.545258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.545298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.545565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.545606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.545871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.545912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.546121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.546160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 
00:35:50.421 [2024-12-10 00:17:34.546358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.546398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.546690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.546730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.546966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.547007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.547287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.547328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.547606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.547647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.547857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.547897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.548104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.548144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.548428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.548469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.548763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.548802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.549119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.549159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 
00:35:50.421 [2024-12-10 00:17:34.549426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.549467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.549669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.549708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.421 qpair failed and we were unable to recover it. 00:35:50.421 [2024-12-10 00:17:34.549924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.421 [2024-12-10 00:17:34.549964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.550125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.550165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.550369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.550409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.550690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.550730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.550889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.550930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.551159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.551199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.551459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.551499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.551724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.551763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 
00:35:50.422 [2024-12-10 00:17:34.551977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.552018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.552236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.552277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.552472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.552511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.552798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.552852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.552993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.553034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.553242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.553281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.553555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.553594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.553788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.553850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.554065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.554105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.554379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.554418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 
00:35:50.422 [2024-12-10 00:17:34.554631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.554671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.554819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.554870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.555111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.555151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.555351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.555392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.555582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.555622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.555881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.555922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.556185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.556225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.556514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.556555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.556766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.556806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.557096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.557136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 
00:35:50.422 [2024-12-10 00:17:34.557336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.557376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.557597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.557637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.557851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.557892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.558150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.558190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.558338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.558378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.558638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.558679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.558973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.422 [2024-12-10 00:17:34.559014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.422 qpair failed and we were unable to recover it. 00:35:50.422 [2024-12-10 00:17:34.559241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.423 [2024-12-10 00:17:34.559281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.423 qpair failed and we were unable to recover it. 00:35:50.423 [2024-12-10 00:17:34.559436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.423 [2024-12-10 00:17:34.559476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.423 qpair failed and we were unable to recover it. 00:35:50.423 [2024-12-10 00:17:34.559632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.423 [2024-12-10 00:17:34.559671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.423 qpair failed and we were unable to recover it. 
00:35:50.423 [2024-12-10 00:17:34.559940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.423 [2024-12-10 00:17:34.559983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.423 qpair failed and we were unable to recover it. 00:35:50.423 [2024-12-10 00:17:34.560175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.423 [2024-12-10 00:17:34.560215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.423 qpair failed and we were unable to recover it. 00:35:50.423 [2024-12-10 00:17:34.560428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.423 [2024-12-10 00:17:34.560468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.423 qpair failed and we were unable to recover it. 00:35:50.423 [2024-12-10 00:17:34.560755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.423 [2024-12-10 00:17:34.560796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.423 qpair failed and we were unable to recover it. 00:35:50.423 [2024-12-10 00:17:34.560935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.423 [2024-12-10 00:17:34.560974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.423 qpair failed and we were unable to recover it. 00:35:50.423 [2024-12-10 00:17:34.561235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.423 [2024-12-10 00:17:34.561275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.423 qpair failed and we were unable to recover it. 00:35:50.423 [2024-12-10 00:17:34.561498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.423 [2024-12-10 00:17:34.561538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.423 qpair failed and we were unable to recover it. 00:35:50.423 [2024-12-10 00:17:34.561742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.423 [2024-12-10 00:17:34.561782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.423 qpair failed and we were unable to recover it. 00:35:50.423 [2024-12-10 00:17:34.562084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.423 [2024-12-10 00:17:34.562125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.423 qpair failed and we were unable to recover it. 00:35:50.423 [2024-12-10 00:17:34.562316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.423 [2024-12-10 00:17:34.562356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.423 qpair failed and we were unable to recover it. 
[... the same pair of errors (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats for every subsequent connection attempt through 2024-12-10 00:17:34.615737 ...]
00:35:50.428 [2024-12-10 00:17:34.615934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.428 [2024-12-10 00:17:34.615975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.428 qpair failed and we were unable to recover it. 00:35:50.428 [2024-12-10 00:17:34.616186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.428 [2024-12-10 00:17:34.616226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.428 qpair failed and we were unable to recover it. 00:35:50.428 [2024-12-10 00:17:34.616452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.428 [2024-12-10 00:17:34.616492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.428 qpair failed and we were unable to recover it. 00:35:50.428 [2024-12-10 00:17:34.616691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.428 [2024-12-10 00:17:34.616737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.428 qpair failed and we were unable to recover it. 00:35:50.428 [2024-12-10 00:17:34.617049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.428 [2024-12-10 00:17:34.617090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.428 qpair failed and we were unable to recover it. 00:35:50.428 [2024-12-10 00:17:34.617286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.617326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.617605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.617645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.617854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.617894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.618183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.618222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.618525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.618565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 
00:35:50.429 [2024-12-10 00:17:34.618781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.618821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.619023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.619062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.619366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.619406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.619614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.619654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.619920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.619962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.620114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.620154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.620358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.620398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.620686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.620726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.620936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.620976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.621167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.621206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 
00:35:50.429 [2024-12-10 00:17:34.621412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.621452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.621669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.621709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.621916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.621957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.622167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.622207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.622401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.622440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.622665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.622706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.622918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.622958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.623084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.623122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.623352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.623393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.623585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.623625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 
00:35:50.429 [2024-12-10 00:17:34.623835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.623877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.624004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.624043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.624193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.624232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.624491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.624531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.624739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.624780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.625022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.625064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.625271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.625311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.625569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.625609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.625870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.625911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.626173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.626213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 
00:35:50.429 [2024-12-10 00:17:34.626353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.626394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.626619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.626659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.626796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.626843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.627127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.627173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.627378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.627418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.627566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.429 [2024-12-10 00:17:34.627606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.429 qpair failed and we were unable to recover it. 00:35:50.429 [2024-12-10 00:17:34.627914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.627956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.628149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.628189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.628401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.628442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.628701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.628740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 
00:35:50.430 [2024-12-10 00:17:34.628954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.628995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.629205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.629245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.629448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.629487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.629642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.629682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.629879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.629921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.630144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.630184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.630441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.630481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.630766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.630807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.631077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.631118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.631350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.631390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 
00:35:50.430 [2024-12-10 00:17:34.631687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.631728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.632022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.632064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.632328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.632368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.632558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.632598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.632804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.632853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.633048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.633088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.633278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.633318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.633598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.633638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.633855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.633897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.634056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.634097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 
00:35:50.430 [2024-12-10 00:17:34.634421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.634502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.634791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.634854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.635053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.635094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.635315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.635356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.635518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.635558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.635766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.635807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.636017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.636057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.636285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.636325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.636529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.636570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.636730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.636769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 
00:35:50.430 [2024-12-10 00:17:34.637044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.637086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.637368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.637408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.637573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.637613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.637900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.637941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.638097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.638138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.638364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.638404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.638685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.638726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.639038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.639079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.430 [2024-12-10 00:17:34.639289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.430 [2024-12-10 00:17:34.639329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.430 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.639587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.639627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 
00:35:50.431 [2024-12-10 00:17:34.639844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.639886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.640147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.640187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.640472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.640512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.640715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.640755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.640990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.641031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.641310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.641350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.641632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.641673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.641977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.642032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.642191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.642232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.642460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.642501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 
00:35:50.431 [2024-12-10 00:17:34.642765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.642805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.643111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.643152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.643346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.643386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.643647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.643687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.643913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.643955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.644151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.644192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.644391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.644431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.644624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.644664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.644946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.644989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.645286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.645326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 
00:35:50.431 [2024-12-10 00:17:34.645609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.645649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.645852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.645894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.646191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.646233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.646430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.646470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.646695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.646735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.647013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.647055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.647277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.647317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.647599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.647639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.647900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.647942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.648151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.648192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 
00:35:50.431 [2024-12-10 00:17:34.648450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.648490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.648702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.648742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.648994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.649035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.649296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.649336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.649609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.649656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.649868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.649909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.650191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.650232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.650444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.650484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.650694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.650734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.650964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.651005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 
00:35:50.431 [2024-12-10 00:17:34.651205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.651245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.431 qpair failed and we were unable to recover it. 00:35:50.431 [2024-12-10 00:17:34.651464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.431 [2024-12-10 00:17:34.651503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.651770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.651810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.652012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.652054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.652264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.652304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.652513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.652553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.652833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.652874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.653133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.653173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.653496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.653536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.653817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.653868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 
00:35:50.432 [2024-12-10 00:17:34.654089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.654129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.654269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.654308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.654616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.654656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.654861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.654903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.655032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.655072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.655334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.655374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.655618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.655658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.655944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.655986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.656129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.656169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.656324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.656363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 
00:35:50.432 [2024-12-10 00:17:34.656666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.656706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.656898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.656946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.657184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.657223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.657499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.657539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.657842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.657884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.658093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.658133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.658338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.658378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.658576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.658615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.658924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.658966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.659278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.659318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 
00:35:50.432 [2024-12-10 00:17:34.659555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.659595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.659859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.659900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.660112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.660152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.660354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.660395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.660691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.660731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.660940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.660981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.661249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.661290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.432 qpair failed and we were unable to recover it. 00:35:50.432 [2024-12-10 00:17:34.661509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.432 [2024-12-10 00:17:34.661549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.661849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.661890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.662159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.662200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 
00:35:50.433 [2024-12-10 00:17:34.662335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.662375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.662664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.662704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.662876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.662918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.663058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.663098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.663229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.663269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.663504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.663544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.663735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.663774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.664064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.664106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.664366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.664407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.664697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.664737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 
00:35:50.433 [2024-12-10 00:17:34.665023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.665064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.665199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.665239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.665374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.665415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.665632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.665671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.665821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.665886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.666094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.666134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.666322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.666362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.666618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.666658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.666946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.666988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.667189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.667230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 
00:35:50.433 [2024-12-10 00:17:34.667419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.667459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.667676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.667715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.667983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.668025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.668287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.668327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.668585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.668625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.668898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.668939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.669147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.669187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.669468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.669508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.433 [2024-12-10 00:17:34.669718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.433 [2024-12-10 00:17:34.669758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.433 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.669985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.670025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 
00:35:50.434 [2024-12-10 00:17:34.670283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.670323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.670565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.670605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.670734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.670774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.671045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.671087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.671340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.671381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.671688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.671728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.671880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.671922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.672155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.672195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.672400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.672438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.672742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.672782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 
00:35:50.434 [2024-12-10 00:17:34.673035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.673077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.673271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.673311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.673536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.673575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.673768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.673808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.674056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.674096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.674309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.674348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.674636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.674676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.674894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.674935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.675142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.675183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.675465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.675511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 
00:35:50.434 [2024-12-10 00:17:34.675790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.675844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.676060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.676101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.676332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.676372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.676528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.676568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.676850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.676892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.677122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.677162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.677348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.677388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.677644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.677684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.677928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.677969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.678111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.678151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 
00:35:50.434 [2024-12-10 00:17:34.678410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.678450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.678731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.678771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.679095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.679137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.679335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.679375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.679585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.679625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.679904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.679946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.680144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.680184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.434 [2024-12-10 00:17:34.680496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.434 [2024-12-10 00:17:34.680536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.434 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.680737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.680778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.681071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.681111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 
00:35:50.435 [2024-12-10 00:17:34.681259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.681299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.681452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.681492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.681683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.681723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.682012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.682053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.682257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.682297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.682447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.682487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.682746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.682792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.683026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.683067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.683309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.683348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.683615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.683655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 
00:35:50.435 [2024-12-10 00:17:34.683886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.683928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.684131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.684171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.684309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.684349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.684491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.684532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.684814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.684861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.685122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.685162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.685445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.685485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.685699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.685739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.685998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.686039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.686192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.686232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 
00:35:50.435 [2024-12-10 00:17:34.686449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.686489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.686771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.686811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.687034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.687075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.687321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.687361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.687513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.687553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.687793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.687840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.688128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.688168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.688430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.688470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.688668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.688708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.688988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.689029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 
00:35:50.435 [2024-12-10 00:17:34.689168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.689208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.689413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.689453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.689736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.689776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.690104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.690146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.690397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.690437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.690651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.690692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.690955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.435 [2024-12-10 00:17:34.690998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.435 qpair failed and we were unable to recover it. 00:35:50.435 [2024-12-10 00:17:34.691205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.691245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.691533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.691574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.691784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.691832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 
00:35:50.436 [2024-12-10 00:17:34.692137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.692177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.692389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.692429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.692706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.692747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.692958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.692999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.693212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.693252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.693536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.693577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.693843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.693885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.694076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.694117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.694349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.694388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.694581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.694621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 
00:35:50.436 [2024-12-10 00:17:34.694887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.694929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.695136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.695175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.695417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.695457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.695711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.695751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.695917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.695959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.696116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.696159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.696445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.696486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.696624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.696663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.696930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.696971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 00:35:50.436 [2024-12-10 00:17:34.697178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.436 [2024-12-10 00:17:34.697218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.436 qpair failed and we were unable to recover it. 
00:35:50.436 [2024-12-10 00:17:34.697366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.436 [2024-12-10 00:17:34.697406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420
00:35:50.436 qpair failed and we were unable to recover it.
00:35:50.436-00:35:50.442 [2024-12-10 00:17:34.697636 through 00:17:34.754399] the same three-line failure repeats for every retried connection attempt in this interval: posix.c:1054:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x243d000 with addr=10.0.0.2, port=4420; and each attempt ends with "qpair failed and we were unable to recover it."
00:35:50.442 [2024-12-10 00:17:34.754589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.754629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.754952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.754992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.755188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.755227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.755373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.755413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.755699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.755738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.755975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.756016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.756231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.756272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.756496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.756535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.756784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.756846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.757110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.757150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 
00:35:50.442 [2024-12-10 00:17:34.757356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.757396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.757617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.757657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.757917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.757960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.758193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.758233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.758482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.758522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.758798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.758848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.759126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.759165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.759423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.759463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.759722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.759762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.759978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.760019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 
00:35:50.442 [2024-12-10 00:17:34.760244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.760284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.760435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.760481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.760760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.760799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.761018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.761059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.761251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.761290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.761450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.761490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.761627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.761667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.761924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.761965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.762179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.762219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 00:35:50.442 [2024-12-10 00:17:34.762365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.442 [2024-12-10 00:17:34.762405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.442 qpair failed and we were unable to recover it. 
00:35:50.443 [2024-12-10 00:17:34.762613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.762653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.762855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.762897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.763025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.763065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.763271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.763310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.763516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.763556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.763774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.763814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.764033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.764072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.764327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.764367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.764673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.764714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.764918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.764959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 
00:35:50.443 [2024-12-10 00:17:34.765098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.765139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.765418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.765457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.765731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.765771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.766030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.766072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.766286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.766325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.766606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.766646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.766932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.766974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.767169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.767209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.767421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.767467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.767615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.767656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 
00:35:50.443 [2024-12-10 00:17:34.767913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.767954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.768163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.768203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.768445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.768486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.768695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.768734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.768962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.769002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.769207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.769247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.769531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.769571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.769857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.769898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.770180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.770220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.770503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.770543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 
00:35:50.443 [2024-12-10 00:17:34.770735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.770774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.771045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.771086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.771287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.771328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.771587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.771627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.771821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.771872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.772179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.772219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.772427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.772467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.772750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.772789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.773099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.773139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.443 qpair failed and we were unable to recover it. 00:35:50.443 [2024-12-10 00:17:34.773349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.443 [2024-12-10 00:17:34.773389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 
00:35:50.444 [2024-12-10 00:17:34.773646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.773686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.773893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.773934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.774208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.774248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.774528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.774569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.774797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.774848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.775157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.775203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.775461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.775502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.775781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.775820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.776037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.776078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.776294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.776334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 
00:35:50.444 [2024-12-10 00:17:34.776601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.776640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.776870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.776911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.777120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.777160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.777353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.777393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.777656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.777695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.777904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.777945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.778095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.778135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.778401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.778441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.778631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.778670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.779038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.779126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 
00:35:50.444 [2024-12-10 00:17:34.779447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.779498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.779807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.779865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.780166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.780213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.780500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.780546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.780698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.780741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.780983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.781025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.781245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.781287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.781432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.781475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.781741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.781784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.781953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.781995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 
00:35:50.444 [2024-12-10 00:17:34.782234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.782277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.782545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.782592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.782898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.782956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.783175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.783224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.783427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.783470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.783741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.783782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.784071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.784112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.784342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.784382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.444 [2024-12-10 00:17:34.784602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.444 [2024-12-10 00:17:34.784642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.444 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.784905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.784946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 
00:35:50.445 [2024-12-10 00:17:34.785208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.785248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.785549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.785590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.785889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.785930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.786182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.786227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.786368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.786408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.786689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.786728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.787011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.787052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.787257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.787297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.787502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.787542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.787834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.787874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 
00:35:50.445 [2024-12-10 00:17:34.788145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.788185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.788395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.788434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.788637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.788677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.788896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.788938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.789130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.789170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.789372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.789412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.789688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.789728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.789986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.790028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.790288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.790328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.790604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.790650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 
00:35:50.445 [2024-12-10 00:17:34.790861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.790902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.791126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.791165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.791425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.791466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.791687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.791727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.791934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.791975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.792251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.792290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.792567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.792606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.792865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.792905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.793192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.793232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.793463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.793503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 
00:35:50.445 [2024-12-10 00:17:34.793800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.793853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.794046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.794087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.794386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.794425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.445 [2024-12-10 00:17:34.794713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.445 [2024-12-10 00:17:34.794753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.445 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.794959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.795001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.795319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.795359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.795560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.795600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.795812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.795862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.796075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.796114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.796311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.796353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 
00:35:50.446 [2024-12-10 00:17:34.796636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.796676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.796880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.796922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.797185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.797225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.797488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.797527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.797739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.797779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.798050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.798128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.798434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.798488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.798776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.798818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.799052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.799093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.799351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.799392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 
00:35:50.446 [2024-12-10 00:17:34.799620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.799660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.799920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.799962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.800212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.800253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.800532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.800572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.800843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.800885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.801148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.801189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.801413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.801454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.801601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.801642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.801843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.801884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.802141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.802182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 
00:35:50.446 [2024-12-10 00:17:34.802407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.802448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.802698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.802737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.802953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.802994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.803196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.803236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.803444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.803484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.803764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.803805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.804028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.804069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.804351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.804390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.804606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.804646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.804847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.804891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 
00:35:50.446 [2024-12-10 00:17:34.805099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.805138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.805352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.805392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.805603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.446 [2024-12-10 00:17:34.805643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.446 qpair failed and we were unable to recover it. 00:35:50.446 [2024-12-10 00:17:34.805910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.805951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.806168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.806208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.806486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.806527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.806853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.806895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.807138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.807179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.807410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.807450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.807683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.807723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 
00:35:50.447 [2024-12-10 00:17:34.807930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.807972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.808119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.808159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.808434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.808474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.808702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.808742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.808964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.809005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.809200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.809241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.809393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.809440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.809722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.809762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.809909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.809951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.810232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.810273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 
00:35:50.447 [2024-12-10 00:17:34.810523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.810564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.810769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.810810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.811052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.811093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.811292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.811332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.811523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.811563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.811753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.811793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.812098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.812140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.812365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.812405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.812695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.812735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.812996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.813038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 
00:35:50.447 [2024-12-10 00:17:34.813329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.813370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.813506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.813546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.813751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.813792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.814012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.814053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.814316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.814355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.814644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.814684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.814995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.815037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.815245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.815285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.815520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.815560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.815766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.815806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 
00:35:50.447 [2024-12-10 00:17:34.815957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.815998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.816191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.447 [2024-12-10 00:17:34.816231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.447 qpair failed and we were unable to recover it. 00:35:50.447 [2024-12-10 00:17:34.816510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.816550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.816696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.816737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.816937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.816979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.817209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.817248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.817474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.817515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.817776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.817816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.818019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.818059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.818348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.818388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 
00:35:50.448 [2024-12-10 00:17:34.818529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.818569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.818775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.818815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.819069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.819110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.819344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.819385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.819680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.819720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.819992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.820033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.820189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.820229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.820472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.820512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.820779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.820819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.820968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.821008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 
00:35:50.448 [2024-12-10 00:17:34.821320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.821359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.821659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.821699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.821979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.822021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.822177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.822216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.822413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.822453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.822663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.822704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.822983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.823024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.823235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.823275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.823473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.823513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.823796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.823843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 
00:35:50.448 [2024-12-10 00:17:34.823993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.824034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.824245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.824286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.824542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.824582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.824743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.824783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.825049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.825090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.825242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.825281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.825473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.825513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.825815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.825867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.826079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.826119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.826279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.826319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 
00:35:50.448 [2024-12-10 00:17:34.826456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.826496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.448 qpair failed and we were unable to recover it. 00:35:50.448 [2024-12-10 00:17:34.826731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.448 [2024-12-10 00:17:34.826772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.449 qpair failed and we were unable to recover it. 00:35:50.449 [2024-12-10 00:17:34.826999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.449 [2024-12-10 00:17:34.827042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.449 qpair failed and we were unable to recover it. 00:35:50.449 [2024-12-10 00:17:34.827235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.449 [2024-12-10 00:17:34.827281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.449 qpair failed and we were unable to recover it. 00:35:50.449 [2024-12-10 00:17:34.827434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.449 [2024-12-10 00:17:34.827475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.449 qpair failed and we were unable to recover it. 00:35:50.449 [2024-12-10 00:17:34.827732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.449 [2024-12-10 00:17:34.827773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.449 qpair failed and we were unable to recover it. 00:35:50.449 [2024-12-10 00:17:34.828002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.449 [2024-12-10 00:17:34.828044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.449 qpair failed and we were unable to recover it. 00:35:50.449 [2024-12-10 00:17:34.828323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.449 [2024-12-10 00:17:34.828363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.449 qpair failed and we were unable to recover it. 00:35:50.449 [2024-12-10 00:17:34.828630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.449 [2024-12-10 00:17:34.828670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.449 qpair failed and we were unable to recover it. 00:35:50.449 [2024-12-10 00:17:34.828975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.449 [2024-12-10 00:17:34.829017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.449 qpair failed and we were unable to recover it. 
00:35:50.449 [2024-12-10 00:17:34.829153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.449 [2024-12-10 00:17:34.829193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.449 qpair failed and we were unable to recover it. 00:35:50.449 [2024-12-10 00:17:34.829435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.449 [2024-12-10 00:17:34.829475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.449 qpair failed and we were unable to recover it. 00:35:50.449 [2024-12-10 00:17:34.829731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.449 [2024-12-10 00:17:34.829771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.449 qpair failed and we were unable to recover it. 00:35:50.449 [2024-12-10 00:17:34.830085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.449 [2024-12-10 00:17:34.830126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.449 qpair failed and we were unable to recover it. 00:35:50.449 [2024-12-10 00:17:34.830425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.449 [2024-12-10 00:17:34.830464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.449 qpair failed and we were unable to recover it. 00:35:50.449 [2024-12-10 00:17:34.830675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.830715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.830937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.830978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.831246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.831286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.831579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.831619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.831848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.831889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 
00:35:50.742 [2024-12-10 00:17:34.832115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.832154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.832412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.832453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.832643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.832683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.832835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.832877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.833069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.833109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.833390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.833430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.833690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.833730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.833937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.833979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.834244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.834284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.834492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.834532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 
00:35:50.742 [2024-12-10 00:17:34.834798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.834859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.835171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.835212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.742 qpair failed and we were unable to recover it. 00:35:50.742 [2024-12-10 00:17:34.835359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.742 [2024-12-10 00:17:34.835398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.743 qpair failed and we were unable to recover it. 00:35:50.743 [2024-12-10 00:17:34.835634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.743 [2024-12-10 00:17:34.835674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.743 qpair failed and we were unable to recover it. 00:35:50.743 [2024-12-10 00:17:34.835982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.743 [2024-12-10 00:17:34.836024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.743 qpair failed and we were unable to recover it. 00:35:50.743 [2024-12-10 00:17:34.836256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.743 [2024-12-10 00:17:34.836297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.743 qpair failed and we were unable to recover it. 00:35:50.743 [2024-12-10 00:17:34.836550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.743 [2024-12-10 00:17:34.836590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.743 qpair failed and we were unable to recover it. 00:35:50.743 [2024-12-10 00:17:34.836873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.743 [2024-12-10 00:17:34.836913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.743 qpair failed and we were unable to recover it. 00:35:50.743 [2024-12-10 00:17:34.837123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.743 [2024-12-10 00:17:34.837163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.743 qpair failed and we were unable to recover it. 00:35:50.743 [2024-12-10 00:17:34.837311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.743 [2024-12-10 00:17:34.837351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.743 qpair failed and we were unable to recover it. 
00:35:50.743 [2024-12-10 00:17:34.837611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.743 [2024-12-10 00:17:34.837651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420
00:35:50.743 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 00:35:50.743 through 00:35:50.749, with only the timestamps advancing ...]
00:35:50.749 [2024-12-10 00:17:34.894401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.749 [2024-12-10 00:17:34.894441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420
00:35:50.749 qpair failed and we were unable to recover it.
00:35:50.749 [2024-12-10 00:17:34.894643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.749 [2024-12-10 00:17:34.894684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.749 qpair failed and we were unable to recover it. 00:35:50.749 [2024-12-10 00:17:34.894943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.749 [2024-12-10 00:17:34.894985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.749 qpair failed and we were unable to recover it. 00:35:50.749 [2024-12-10 00:17:34.895198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.749 [2024-12-10 00:17:34.895239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.749 qpair failed and we were unable to recover it. 00:35:50.749 [2024-12-10 00:17:34.895464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.749 [2024-12-10 00:17:34.895505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.749 qpair failed and we were unable to recover it. 00:35:50.749 [2024-12-10 00:17:34.895632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.749 [2024-12-10 00:17:34.895672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.749 qpair failed and we were unable to recover it. 00:35:50.749 [2024-12-10 00:17:34.895943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.749 [2024-12-10 00:17:34.895986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.749 qpair failed and we were unable to recover it. 00:35:50.749 [2024-12-10 00:17:34.896209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.749 [2024-12-10 00:17:34.896249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.749 qpair failed and we were unable to recover it. 00:35:50.749 [2024-12-10 00:17:34.896393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.749 [2024-12-10 00:17:34.896434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.749 qpair failed and we were unable to recover it. 00:35:50.749 [2024-12-10 00:17:34.896624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.749 [2024-12-10 00:17:34.896664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.749 qpair failed and we were unable to recover it. 00:35:50.749 [2024-12-10 00:17:34.896890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.749 [2024-12-10 00:17:34.896932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.749 qpair failed and we were unable to recover it. 
00:35:50.749 [2024-12-10 00:17:34.897188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.749 [2024-12-10 00:17:34.897229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.749 qpair failed and we were unable to recover it. 00:35:50.749 [2024-12-10 00:17:34.897433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.749 [2024-12-10 00:17:34.897473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.897756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.897796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.898047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.898090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.898323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.898363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.898642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.898682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.898893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.898934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.899152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.899193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.899474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.899514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.899775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.899815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 
00:35:50.750 [2024-12-10 00:17:34.899993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.900034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.900314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.900354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.900544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.900585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.900853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.900895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.901149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.901190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.901472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.901512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.901665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.901706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.901993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.902035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.902246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.902286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.902489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.902529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 
00:35:50.750 [2024-12-10 00:17:34.902810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.902872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.903083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.903124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.903365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.903405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.903597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.903645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.903858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.903899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.904158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.904199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.904333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.904373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.904520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.904560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.904818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.904870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.905063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.905103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 
00:35:50.750 [2024-12-10 00:17:34.905382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.905422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.905698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.905738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.905999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.906040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.906270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.906310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.906465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.906505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.906809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.906858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.907077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.907117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.907353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.907394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.907627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.907667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.907871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.907913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 
00:35:50.750 [2024-12-10 00:17:34.908174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.908214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.908423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.908463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.908714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.908755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.909047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.909089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.909285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.909325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.909598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.909639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.909898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.909940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.910218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.910258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.910463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.910503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.910761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.910802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 
00:35:50.750 [2024-12-10 00:17:34.910950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.910991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.911193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.911233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.911442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.911482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.911740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.911781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.912026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.750 [2024-12-10 00:17:34.912067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.750 qpair failed and we were unable to recover it. 00:35:50.750 [2024-12-10 00:17:34.912260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.912301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.912493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.912533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.912795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.912842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.913064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.913105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.913312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.913352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 
00:35:50.751 [2024-12-10 00:17:34.913595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.913635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.913929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.913971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.914274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.914314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.914510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.914556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.914844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.914885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.915116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.915157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.915354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.915393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.915544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.915585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.915722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.915763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.916010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.916052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 
00:35:50.751 [2024-12-10 00:17:34.916309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.916350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.916494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.916535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.916743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.916783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.917100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.917142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.917352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.917392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.917673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.917713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.917972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.918014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.918284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.918325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.918535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.918576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.918768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.918808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 
00:35:50.751 [2024-12-10 00:17:34.918972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.919012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.919273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.919313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.919572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.919613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.919821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.919887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.920028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.920068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.920328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.920369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.920626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.920667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.920879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.920921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.921131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.921171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.921313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.921352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 
00:35:50.751 [2024-12-10 00:17:34.921664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.921705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.921963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.922005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.922267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.751 [2024-12-10 00:17:34.922307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.751 qpair failed and we were unable to recover it. 00:35:50.751 [2024-12-10 00:17:34.922514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.922554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.922776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.922816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.923027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.923068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.923259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.923300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.923581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.923621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.923767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.923807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.923978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.924019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 
00:35:50.752 [2024-12-10 00:17:34.924255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.924295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.924502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.924542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.924734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.924774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.924983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.925030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.925264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.925304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.925516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.925555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.925781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.925821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.926118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.926159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.926313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.926353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.926613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.926653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 
00:35:50.752 [2024-12-10 00:17:34.926933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.926975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.927180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.927220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.927422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.927463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.927678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.927718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.928021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.928063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.928300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.928340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.928625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.928667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.928880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.928922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.929224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.929264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.929475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.929516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 
00:35:50.752 [2024-12-10 00:17:34.929775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.929815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.930039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.930079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.930359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.930399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.930687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.930727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.930873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.930915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.931173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.931213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.931424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.931464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.931724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.931764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.932056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.932097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.932313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.932354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 
00:35:50.752 [2024-12-10 00:17:34.932553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.932593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.932737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.932777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.933029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.933071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.933284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.933324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.933518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.933559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.933791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.933841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.934057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.934098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.934240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.934280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.934511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.934551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.934745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.934785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 
00:35:50.752 [2024-12-10 00:17:34.935032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.752 [2024-12-10 00:17:34.935074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.752 qpair failed and we were unable to recover it. 00:35:50.752 [2024-12-10 00:17:34.935276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.935316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.935458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.935499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.935727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.935774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.936006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.936048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.936311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.936351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.936489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.936529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.936741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.936781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.937055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.937097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.937286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.937326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 
00:35:50.753 [2024-12-10 00:17:34.937458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.937498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.937718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.937758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.937973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.938014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.938142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.938183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.938460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.938500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.938709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.938749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.938961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.939002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.939212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.939253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.939446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.939486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.939771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.939811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 
00:35:50.753 [2024-12-10 00:17:34.940091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.940132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.940361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.940400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.940620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.940660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.940882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.940924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.941132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.941173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.941408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.941449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.941653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.941694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.941951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.941993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.942257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.942298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.942439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.942479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 
00:35:50.753 [2024-12-10 00:17:34.942615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.942656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.942933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.942975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.943126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.943166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.943447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.943488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.943749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.943789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.944012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.944053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.944212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.944251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.944533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.944575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.944782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.944835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.945045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.945085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 
00:35:50.753 [2024-12-10 00:17:34.945236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.945276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.945535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.945575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.945711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.945751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.945913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.945961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.946200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.946242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.946377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.946417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.946646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.946686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.946973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.947016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.947297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.947338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.947546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.947587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 
00:35:50.753 [2024-12-10 00:17:34.947867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.947909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.948141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.948181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.948394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.948434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.948717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.948758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.948973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.949014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.949168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.949208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.949406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.949447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.949647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.949688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.949946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.949987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.950192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.950233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 
00:35:50.753 [2024-12-10 00:17:34.950434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.950474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.950734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.950775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.951070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.951112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.951304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.951344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.951538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.951578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.951772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.951813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.952100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.952141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.952334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.952374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.952516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.952556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.952872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.952917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 
00:35:50.753 [2024-12-10 00:17:34.953082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.753 [2024-12-10 00:17:34.953123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.753 qpair failed and we were unable to recover it. 00:35:50.753 [2024-12-10 00:17:34.953380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.953420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.953617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.953657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.953857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.953898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.954106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.954146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.954337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.954377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.954569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.954610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.954756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.954796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.955001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.955041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.955243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.955284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 
00:35:50.754 [2024-12-10 00:17:34.955478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.955518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.955746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.955786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.955941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.955981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.956116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.956156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.956306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.956346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.956491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.956531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.956731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.956771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.957006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.957047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.957310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.957351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.957477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.957517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 
00:35:50.754 [2024-12-10 00:17:34.957775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.957816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.957980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.958021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.958171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.958212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.958404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.958444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.958648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.958690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.958883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.958925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.959168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.959208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.959408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.959449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.959666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.959707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.959846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.959888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 
00:35:50.754 [2024-12-10 00:17:34.960083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.960123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.960269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.960309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.960607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.960647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.960927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.960968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.961233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.961273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.961530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.961571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.961851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.961893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.962103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.962144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.962364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.962404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.962693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.962733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 
00:35:50.754 [2024-12-10 00:17:34.962867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.962920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.963136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.963175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.963309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.963349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.963630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.963670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.963865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.963906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.964113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.964153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.964410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.964450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.964595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.964635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.964788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.964836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.965050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.965091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 
00:35:50.754 [2024-12-10 00:17:34.965350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.965390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.965529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.965569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.965838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.965880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.966008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.966048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.966333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.966374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.966566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.966607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.966866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.966908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.967123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.967163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.967366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.967407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.967602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.967642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 
00:35:50.754 [2024-12-10 00:17:34.967781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.967821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.968053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.968094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.968322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.968362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.968643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.968683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.968815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.968878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.969068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.969108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.969265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.969305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.969569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.969609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.969896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.969937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.754 qpair failed and we were unable to recover it. 00:35:50.754 [2024-12-10 00:17:34.970149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.754 [2024-12-10 00:17:34.970189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 
00:35:50.755 [2024-12-10 00:17:34.970404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.970444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.970645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.970685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.970893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.970934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.971130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.971170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.971454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.971495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.971755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.971796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.971938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.971979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.972182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.972223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.972435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.972476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.972679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.972719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 
00:35:50.755 [2024-12-10 00:17:34.972844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.972892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.973087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.973127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.973334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.973375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.973570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.973610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.973817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.973865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.974077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.974118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.974326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.974366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.974492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.974532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.974797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.974849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.975054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.975095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 
00:35:50.755 [2024-12-10 00:17:34.975303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.975343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.975467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.975507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.975788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.975838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.976043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.976084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.976217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.976258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.976486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.976526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.976814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.976876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.977137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.977177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.977318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.977358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.977643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.977683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 
00:35:50.755 [2024-12-10 00:17:34.977877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.977918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.978191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.978232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.978541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.978581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.978775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.978816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.979086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.979126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.979333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.979373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.979587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.979627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.979791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.979842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.980034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.980075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.980289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.980329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 
00:35:50.755 [2024-12-10 00:17:34.980526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.980566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.980762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.980802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.981015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.981056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.981289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.981330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.981535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.981576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.981840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.981882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.982077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.982117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.982254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.982295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.982555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.982595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.982881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.982923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 
00:35:50.755 [2024-12-10 00:17:34.983184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.983230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.983537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.983577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.983841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.983883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.984093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.984134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.984268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.984308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.984502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.984542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.984743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.984782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.985069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.985111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.985314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.985354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.985556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.985596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 
00:35:50.755 [2024-12-10 00:17:34.985873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.985915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.986058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.986098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.986381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.986420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.986563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.986603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.986808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.986863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.987011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.987050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.987275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.987314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.987440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.987479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.987786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.987836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.987978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.988018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 
00:35:50.755 [2024-12-10 00:17:34.988299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.988339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.988543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.988583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.988812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.988862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.989100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.755 [2024-12-10 00:17:34.989141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.755 qpair failed and we were unable to recover it. 00:35:50.755 [2024-12-10 00:17:34.989436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.989476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.989669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.989709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.989863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.989905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.990233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.990274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.990484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.990524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.990736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.990776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 
00:35:50.756 [2024-12-10 00:17:34.991024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.991065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.991292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.991331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.991615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.991654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.991892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.991933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.992164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.992205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.992486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.992525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.992758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.992798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.993024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.993065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.993323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.993363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.993646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.993686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 
00:35:50.756 [2024-12-10 00:17:34.993897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.993945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.994209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.994249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.994439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.994480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.994630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.994671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.994874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.994916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.995111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.995152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.995315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.995355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.995634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.995674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.995891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.995933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.996159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.996198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 
00:35:50.756 [2024-12-10 00:17:34.996446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.996485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.996623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.996663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.996803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.996850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.997132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.997172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.997393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.997434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.997713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.997753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.997906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.997948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.998103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.998144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.998344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.998384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.998663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.998703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 
00:35:50.756 [2024-12-10 00:17:34.998897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.998939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.999225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.999265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.999523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.999563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:34.999792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:34.999841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.000075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.000115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.000417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.000457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.000650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.000691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.000921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.000971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.001179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.001220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.001430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.001470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 
00:35:50.756 [2024-12-10 00:17:35.001676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.001717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.002011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.002054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.002292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.002335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.002560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.002601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.002917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.002959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.003179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.003220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.003511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.003552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.003844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.003887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.004112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.004153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.004281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.004321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 
00:35:50.756 [2024-12-10 00:17:35.004597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.004645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.004777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.004818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.005061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.005101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.005361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.005402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.005602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.005643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.005784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.005832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.006092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.006132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.006335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.006375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.006521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.006561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.006755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.006796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 
00:35:50.756 [2024-12-10 00:17:35.007053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.007095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.007299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.007339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.007620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.007661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.007878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.007920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.008121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.008162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.008371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.008412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.008604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.008644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.008933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.008975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.009174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.756 [2024-12-10 00:17:35.009215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.756 qpair failed and we were unable to recover it. 00:35:50.756 [2024-12-10 00:17:35.009425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.009465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 
00:35:50.757 [2024-12-10 00:17:35.009637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.009678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.009907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.009949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.010151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.010193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.010450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.010490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.010771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.010812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.010969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.011010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.011268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.011309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.011524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.011565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.011711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.011755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.011986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.012028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 
00:35:50.757 [2024-12-10 00:17:35.012248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.012288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.012493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.012534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.012794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.012844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.013122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.013163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.013422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.013463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.013673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.013714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.013919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.013962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.014186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.014226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.014492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.014533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.014754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.014794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 
00:35:50.757 [2024-12-10 00:17:35.015030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.015083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.015320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.015361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.015649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.015691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.015886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.015927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.016189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.016230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.016553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.016594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.016820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.016869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.017151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.017192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.017398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.017438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 00:35:50.757 [2024-12-10 00:17:35.017636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.757 [2024-12-10 00:17:35.017676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.757 qpair failed and we were unable to recover it. 
00:35:50.757 [2024-12-10 00:17:35.017848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.757 [2024-12-10 00:17:35.017892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420
00:35:50.757 qpair failed and we were unable to recover it.
00:35:50.760 [... the same posix_sock_create connect() failure (errno = 111) and the matching nvme_tcp_qpair_connect_sock error for tqpair=0x7fa974000b90 (addr=10.0.0.2, port=4420) repeat continuously, differing only in their timestamps (00:17:35.017848 through 00:17:35.076377); every attempt ends with "qpair failed and we were unable to recover it." ...]
00:35:50.760 [2024-12-10 00:17:35.076526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.076567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.076700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.076741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.076950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.076991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.077193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.077233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.077490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.077531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.077807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.077858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.078167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.078208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.078486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.078528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.078814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.078862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.079119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.079160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 
00:35:50.760 [2024-12-10 00:17:35.079370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.079410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.079690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.079730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.079859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.079902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.080115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.080156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.080462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.080503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.080724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.080769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.081077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.081119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.081383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.081423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.081620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.081660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.081888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.081930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 
00:35:50.760 [2024-12-10 00:17:35.082131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.082177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.082462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.082502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.082644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.082685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.082944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.082985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.083292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.083333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.083599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.083640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.083784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.083836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.084107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.084147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.084288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.084329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.084612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.084653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 
00:35:50.760 [2024-12-10 00:17:35.084790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.084844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.085041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.085081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.085298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.085338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.085599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.085639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.085935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.085977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.086209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.086250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.086466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.086506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.086773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.086812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.086968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.087009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.087269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.087309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 
00:35:50.760 [2024-12-10 00:17:35.087573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.087613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.087808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.087883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.088169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.088210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.088425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.088466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.088659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.088699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.088976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.089018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.089305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.760 [2024-12-10 00:17:35.089346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.760 qpair failed and we were unable to recover it. 00:35:50.760 [2024-12-10 00:17:35.089580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.089621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.089835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.089877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.090082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.090123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 
00:35:50.761 [2024-12-10 00:17:35.090325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.090365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.090576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.090617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.090767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.090808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.091033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.091074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.091227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.091268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.091500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.091541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.091800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.091862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.092086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.092127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.092287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.092327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.092533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.092573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 
00:35:50.761 [2024-12-10 00:17:35.092765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.092806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.093086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.093128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.093321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.093361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.093622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.093663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.093933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.093976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.094170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.094210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.094411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.094452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.094603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.094644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.094902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.094947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.095222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.095263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 
00:35:50.761 [2024-12-10 00:17:35.095565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.095605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.095831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.095872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.096031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.096073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.096268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.096310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.096576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.096617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.096901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.096943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.097213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.097257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.097408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.097449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.097717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.097758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.097973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.098015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 
00:35:50.761 [2024-12-10 00:17:35.098157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.098197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.098459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.098499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.098709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.098749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.098986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.099028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.099181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.099221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.099518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.099559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.099714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.099754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.099931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.099979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.100239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.100280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.100552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.100592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 
00:35:50.761 [2024-12-10 00:17:35.100800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.100854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.101062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.101102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.101297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.101338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.101541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.101582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.101866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.101908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.102113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.102154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.102436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.102476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.102618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.102658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.102850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.102892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.103035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.103076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 
00:35:50.761 [2024-12-10 00:17:35.103362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.103402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.103667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.103707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.103985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.104026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.104291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.104331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.104615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.104656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.104846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.104888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.105169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.105209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.105411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.105453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.105606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.105646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.105798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.105863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 
00:35:50.761 [2024-12-10 00:17:35.106106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.106148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.106431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.106471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.106764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.106805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.107030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.107071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.107282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.107323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.107548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.107588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.107869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.761 [2024-12-10 00:17:35.107911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.761 qpair failed and we were unable to recover it. 00:35:50.761 [2024-12-10 00:17:35.108150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.762 [2024-12-10 00:17:35.108191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.762 qpair failed and we were unable to recover it. 00:35:50.762 [2024-12-10 00:17:35.108401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.762 [2024-12-10 00:17:35.108442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.762 qpair failed and we were unable to recover it. 00:35:50.762 [2024-12-10 00:17:35.108699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.762 [2024-12-10 00:17:35.108740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.762 qpair failed and we were unable to recover it. 
00:35:50.762 [2024-12-10 00:17:35.108993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.762 [2024-12-10 00:17:35.109034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.762 qpair failed and we were unable to recover it. 00:35:50.762 [2024-12-10 00:17:35.109319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.762 [2024-12-10 00:17:35.109361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.762 qpair failed and we were unable to recover it. 00:35:50.762 [2024-12-10 00:17:35.109577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.762 [2024-12-10 00:17:35.109618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.762 qpair failed and we were unable to recover it. 00:35:50.762 [2024-12-10 00:17:35.109841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.762 [2024-12-10 00:17:35.109882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.762 qpair failed and we were unable to recover it. 00:35:50.762 [2024-12-10 00:17:35.110092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.762 [2024-12-10 00:17:35.110133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.762 qpair failed and we were unable to recover it. 00:35:50.762 [2024-12-10 00:17:35.110283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.762 [2024-12-10 00:17:35.110324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.762 qpair failed and we were unable to recover it. 00:35:50.762 [2024-12-10 00:17:35.110609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.762 [2024-12-10 00:17:35.110653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.762 qpair failed and we were unable to recover it. 00:35:50.762 [2024-12-10 00:17:35.110915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.762 [2024-12-10 00:17:35.110963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.762 qpair failed and we were unable to recover it. 00:35:50.762 [2024-12-10 00:17:35.111182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.762 [2024-12-10 00:17:35.111224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.762 qpair failed and we were unable to recover it. 00:35:50.762 [2024-12-10 00:17:35.111430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.762 [2024-12-10 00:17:35.111477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.762 qpair failed and we were unable to recover it. 
00:35:50.762 [2024-12-10 00:17:35.111623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.762 [2024-12-10 00:17:35.111666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420
00:35:50.762 qpair failed and we were unable to recover it.
00:35:50.762 [repeated: the same three-line sequence, "posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111", "nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it.", recurs for every reconnect attempt logged between 00:17:35.111623 and 00:17:35.168886 (console time 00:35:50.762 through 00:35:50.764), always against the same tqpair, address, and port]
00:35:50.765 [2024-12-10 00:17:35.169169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.169209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.169489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.169530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.169759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.169799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.170100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.170141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.170277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.170317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.170481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.170522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.170721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.170761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.171028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.171070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.171346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.171387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.171543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.171583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 
00:35:50.765 [2024-12-10 00:17:35.171794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.171844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.172127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.172168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.172435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.172475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.172622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.172662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.172873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.172915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.173195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.173236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.173470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.173510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.173769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.173809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.174087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.174128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.174335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.174376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 
00:35:50.765 [2024-12-10 00:17:35.174533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.174574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.174782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.174833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.174972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.175013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.175239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.175279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.175560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.175601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.175861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.175904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.176185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.176226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.176384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.176430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.176712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.176753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.176971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.177013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 
00:35:50.765 [2024-12-10 00:17:35.177154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.177194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.177452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.177492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.177725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.177765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.177913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.177954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.178103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.178143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.178373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.178413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.178680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.178720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.179004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.179045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.179262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.179303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.179446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.179486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 
00:35:50.765 [2024-12-10 00:17:35.179625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.179665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.179881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.179923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.180183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.180223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.180430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.180470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.180730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.180770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.180984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.181025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.181306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.181346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.181564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.181604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.181842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.181884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.182092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.182132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 
00:35:50.765 [2024-12-10 00:17:35.182390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.182430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.182687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.182728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.182994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.183036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.183246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.183287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.183551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.183593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.183860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.183902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.184205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.184246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.184400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.184440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.184640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.184680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.184983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.185024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 
00:35:50.765 [2024-12-10 00:17:35.185176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.185216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.185459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.185499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.185723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.185763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.186027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.186068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.186350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.186391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.186682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.186723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.186976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.187017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.187279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.187325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.187605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.187646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.765 qpair failed and we were unable to recover it. 00:35:50.765 [2024-12-10 00:17:35.187852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.765 [2024-12-10 00:17:35.187894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.766 qpair failed and we were unable to recover it. 
00:35:50.766 [2024-12-10 00:17:35.188125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.766 [2024-12-10 00:17:35.188165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.766 qpair failed and we were unable to recover it. 00:35:50.766 [2024-12-10 00:17:35.188388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.766 [2024-12-10 00:17:35.188429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:50.766 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.188639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.188680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.188888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.188930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.189215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.189255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.189390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.189431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.189676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.189718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.189978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.190019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.190302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.190343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.190540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.190581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 
00:35:51.042 [2024-12-10 00:17:35.190809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.190862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.191108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.191148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.191436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.191477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.191627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.191667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.191940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.191981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.192261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.192302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.192509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.192549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.192763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.192803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.193094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.193135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 00:35:51.042 [2024-12-10 00:17:35.193370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.042 [2024-12-10 00:17:35.193410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.042 qpair failed and we were unable to recover it. 
00:35:51.042 [2024-12-10 00:17:35.193617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.193657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.193941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.193982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.194229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.194270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.194482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.194523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.194835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.194876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.195086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.195127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.195415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.195455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.195596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.195636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.195921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.195963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.196248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.196289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 
00:35:51.043 [2024-12-10 00:17:35.196568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.196608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.196806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.196857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.197135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.197177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.197465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.197505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.197648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.197688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.197947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.197989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.198182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.198223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.198416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.198462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.198724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.198764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.199001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.199042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 
00:35:51.043 [2024-12-10 00:17:35.199252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.199293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.199576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.199616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.199837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.199879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.200079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.200119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.200338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.200378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.200664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.200704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.200932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.200973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.201099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.201140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.201280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.201321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.201601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.201642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 
00:35:51.043 [2024-12-10 00:17:35.201898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.201941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.202141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.202181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.202375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.202416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.202678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.202718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.202916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.202958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.203242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.203283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.203488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.203529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.203813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.203861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.043 [2024-12-10 00:17:35.204005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.043 [2024-12-10 00:17:35.204045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.043 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.204270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.204311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 
00:35:51.044 [2024-12-10 00:17:35.204514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.204554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.204814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.204865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.205013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.205054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.205297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.205337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.205629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.205670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.205898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.205940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.206172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.206212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.206423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.206463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.206765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.206805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.207085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.207126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 
00:35:51.044 [2024-12-10 00:17:35.207419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.207460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.207670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.207711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.207918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.207960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.208247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.208288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.208483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.208524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.208670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.208711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.208921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.208963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.209175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.209221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.209365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.209406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.209611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.209653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 
00:35:51.044 [2024-12-10 00:17:35.209894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.209973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.210154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.210196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.210423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.210465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.210662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.210702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.210843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.210885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.211118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.211159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.211449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.211488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.211796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.211844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.212055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.212096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.212310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.212351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 
00:35:51.044 [2024-12-10 00:17:35.212612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.212652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.212936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.212978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.213266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.213306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.213510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.213550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.213864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.213906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.214188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.214229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.214515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.214555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.044 qpair failed and we were unable to recover it. 00:35:51.044 [2024-12-10 00:17:35.214708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.044 [2024-12-10 00:17:35.214748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.214980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.215021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.215176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.215216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 
00:35:51.045 [2024-12-10 00:17:35.215425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.215465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.215680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.215720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.215931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.215972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.216203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.216244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.216455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.216496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.216774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.216814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.216973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.217014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.217150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.217189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.217397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.217437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.217634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.217675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 
00:35:51.045 [2024-12-10 00:17:35.217970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.218013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.218244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.218285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.218445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.218485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.218756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.218797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.219103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.219145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.219305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.219345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.219546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.219586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.219790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.219846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.220055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.220096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.220248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.220288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 
00:35:51.045 [2024-12-10 00:17:35.220485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.220524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.220718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.220759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.220910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.220952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.221205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.221245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.221457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.221497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.221719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.221760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.222032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.222074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.222264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.222304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.222563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.222603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.222866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.222908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 
00:35:51.045 [2024-12-10 00:17:35.223111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.223151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.223424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.223466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.223725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.223766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.223970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.224011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.224270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.224311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.045 [2024-12-10 00:17:35.224559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.045 [2024-12-10 00:17:35.224600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.045 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.224889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.224931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.225136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.225176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.225404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.225444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.225654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.225694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 
00:35:51.046 [2024-12-10 00:17:35.225889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.225932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.226191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.226231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.226424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.226464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.226602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.226643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.226857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.226899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.227158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.227199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.227336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.227377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.227575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.227615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.227819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.227868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.228057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.228098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 
00:35:51.046 [2024-12-10 00:17:35.228378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.228418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.228626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.228666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.228966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.229008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.229214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.229254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.229506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.229547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.229882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.229923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.230145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.230185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.230397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.230443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.230726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.230765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.230933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.230975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 
00:35:51.046 [2024-12-10 00:17:35.231131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.231172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.231318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.231358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.231507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.231548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.231850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.231892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.232182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.232222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.232356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.232396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.232542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.232583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.232782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.232832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.233074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.233115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.046 [2024-12-10 00:17:35.233401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.233441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 
00:35:51.046 [2024-12-10 00:17:35.233717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.046 [2024-12-10 00:17:35.233756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.046 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.233980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.234022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.234291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.234332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.234534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.234575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.234865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.234906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.235056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.235096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.235305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.235345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.235604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.235645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.235866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.235908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.236055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.236096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 
00:35:51.047 [2024-12-10 00:17:35.236306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.236347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.236550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.236590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.236728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.236768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.237060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.237101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.237314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.237360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.237555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.237595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.237867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.237909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.238120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.238160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.238445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.238485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.238678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.238719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 
00:35:51.047 [2024-12-10 00:17:35.238990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.239031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.239180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.239220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.239363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.239402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.239663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.239703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.239915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.239957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.240165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.240205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.240488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.240529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.240801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.240850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.241131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.241171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.241404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.241445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 
00:35:51.047 [2024-12-10 00:17:35.241709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.241749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.241964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.242005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.242167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.242207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.242371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.242412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.242612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.242652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.242857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.242899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.243132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.243173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.047 [2024-12-10 00:17:35.243457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.047 [2024-12-10 00:17:35.243497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.047 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.243769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.243809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.244038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.244079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 
00:35:51.048 [2024-12-10 00:17:35.244285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.244326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.244591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.244632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.244907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.244948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.245213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.245252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.245536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.245577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.245782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.245821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.246049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.246090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.246371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.246411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.246604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.246643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.246901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.246942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 
00:35:51.048 [2024-12-10 00:17:35.247105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.247145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.247429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.247469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.247661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.247701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.247902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.247945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.248217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.248263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.248567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.248608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.248764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.248804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.249027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.249066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.249258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.249296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 00:35:51.048 [2024-12-10 00:17:35.249507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.048 [2024-12-10 00:17:35.249546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.048 qpair failed and we were unable to recover it. 
00:35:51.048 [2024-12-10 00:17:35.249687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.048 [2024-12-10 00:17:35.249725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420
00:35:51.048 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats continuously from 00:17:35.249869 through 00:17:35.306481; duplicate log entries condensed ...]
00:35:51.054 [2024-12-10 00:17:35.306690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.306730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.307009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.307051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.307313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.307354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.307622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.307662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.307810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.307864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.308142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.308183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.308378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.308419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.308575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.308615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.308925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.308967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.309179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.309220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 
00:35:51.054 [2024-12-10 00:17:35.309368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.309408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.309692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.309733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.309959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.310002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.310147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.310187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.310419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.310459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.310693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.310734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.310977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.311019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.311244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.311285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.311495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.311536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.311673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.311713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 
00:35:51.054 [2024-12-10 00:17:35.311865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.311906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.312121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.312162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.312448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.312488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.312754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.312794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.313086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.313133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.313278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.313319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.313605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.313645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.313855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.313920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.314220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.314260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.314417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.314457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 
00:35:51.054 [2024-12-10 00:17:35.314616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.314656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.314918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.314960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.315171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.315212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.315406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.315445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.315668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.315709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.315853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.315896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.316156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.054 [2024-12-10 00:17:35.316196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.054 qpair failed and we were unable to recover it. 00:35:51.054 [2024-12-10 00:17:35.316426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.316467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.316602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.316643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.316850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.316891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 
00:35:51.055 [2024-12-10 00:17:35.317193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.317233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.317537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.317578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.317788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.317857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.318141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.318182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.318389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.318429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.318689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.318729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.319032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.319075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.319305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.319345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.319493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.319533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.319813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.319861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 
00:35:51.055 [2024-12-10 00:17:35.320090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.320131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.320412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.320453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.320663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.320703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.320901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.320941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.321084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.321124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.321332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.321372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.321573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.321613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.321833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.321874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.322083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.322124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.322267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.322307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 
00:35:51.055 [2024-12-10 00:17:35.322516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.322556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.322768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.322808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.323031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.323072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.323282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.323322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.323515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.323561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.323847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.323889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.324166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.324206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.324419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.324460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.324746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.324785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.325059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.325100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 
00:35:51.055 [2024-12-10 00:17:35.325308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.325349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.325501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.325541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.055 [2024-12-10 00:17:35.325756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.055 [2024-12-10 00:17:35.325796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.055 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.325974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.326016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.326177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.326218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.326423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.326463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.326674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.326714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.326922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.326964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.327229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.327270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.327463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.327504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 
00:35:51.056 [2024-12-10 00:17:35.327693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.327733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.328040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.328083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.328229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.328269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.328576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.328616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.328918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.328959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.329243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.329282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.329427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.329468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.329628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.329670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.329947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.329988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.330195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.330235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 
00:35:51.056 [2024-12-10 00:17:35.330379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.330419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.330620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.330661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.330866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.330907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.331169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.331210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.331337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.331377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.331518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.331558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.331786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.331837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.332098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.332139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.332275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.332315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.332532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.332573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 
00:35:51.056 [2024-12-10 00:17:35.332843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.332885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.333117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.333159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.333379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.333419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.333628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.333669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.333928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.333978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.334187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.334227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.334437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.334477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.334673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.334713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.334924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.334965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.335169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.335209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 
00:35:51.056 [2024-12-10 00:17:35.335360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.335400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.056 [2024-12-10 00:17:35.335682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.056 [2024-12-10 00:17:35.335721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.056 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.335855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.335896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.336160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.336200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.336392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.336433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.336628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.336668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.336899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.336941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.337213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.337252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.337421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.337462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.337613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.337654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 
00:35:51.057 [2024-12-10 00:17:35.337873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.337916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.338149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.338190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.338315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.338355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.338629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.338669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.338896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.338938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.339169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.339210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.339429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.339469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.339675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.339715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.339927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.339969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 00:35:51.057 [2024-12-10 00:17:35.340182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.340222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it. 
00:35:51.057 [2024-12-10 00:17:35.340424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.057 [2024-12-10 00:17:35.340464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.057 qpair failed and we were unable to recover it.
00:35:51.062 [2024-12-10 00:17:35.340633 .. 00:17:35.398710] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7fa974000b90 (addr=10.0.0.2, port=4420) repeats continuously over this interval.
00:35:51.063 [2024-12-10 00:17:35.398920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.398963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.399273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.399313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.399530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.399578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.399845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.399887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.400171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.400211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.400459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.400500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.400791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.400842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.401062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.401103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.401408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.401447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.401674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.401715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 
00:35:51.063 [2024-12-10 00:17:35.402017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.402059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.402319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.402359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.402511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.402551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.402842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.402884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.403175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.403215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.403512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.403552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.403851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.403894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.404175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.404215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.404414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.404454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.404743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.404784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 
00:35:51.063 [2024-12-10 00:17:35.405054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.405095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.405389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.405428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.405663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.405703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.405909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.405951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.406249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.406289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.406499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.406539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.406800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.406853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.407100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.407140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.407447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.407487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.407771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.407810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 
00:35:51.063 [2024-12-10 00:17:35.408059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.408100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.408400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.408446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.408777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.408818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.409034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.409075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.409279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.409319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.409471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.409512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.409797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.409848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.410027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.410068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.410353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.063 [2024-12-10 00:17:35.410393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.063 qpair failed and we were unable to recover it. 00:35:51.063 [2024-12-10 00:17:35.410705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.410745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 
00:35:51.064 [2024-12-10 00:17:35.411076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.411118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.411402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.411443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.411670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.411710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.411911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.411954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.412265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.412306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.412619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.412660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.412942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.412984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.413197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.413238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.413533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.413573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.413783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.413851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 
00:35:51.064 [2024-12-10 00:17:35.414123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.414164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.414429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.414470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.414746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.414786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.415064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.415105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.415302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.415343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.415471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.415511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.415740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.415780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.416029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.416090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.416365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.416406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.416634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.416674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 
00:35:51.064 [2024-12-10 00:17:35.416949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.416990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.417206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.417246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.417557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.417597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.417863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.417927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.418155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.418195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.418348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.418388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.418673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.418713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.419007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.419049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.419351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.419391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.419676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.419716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 
00:35:51.064 [2024-12-10 00:17:35.419976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.420018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.420249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.420295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.420591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.420631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.420936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.420978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.064 [2024-12-10 00:17:35.421246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.064 [2024-12-10 00:17:35.421287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.064 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.421576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.421617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.421844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.421886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.422197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.422237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.422511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.422551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.422764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.422804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 
00:35:51.065 [2024-12-10 00:17:35.423039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.423080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.423293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.423334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.423562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.423602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.423914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.423956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.424187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.424228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.424522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.424563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.424694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.424735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.425001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.425042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.425312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.425352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.425671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.425729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 
00:35:51.065 [2024-12-10 00:17:35.425968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.426010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.426297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.426338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.426582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.426622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.426844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.426887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.427198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.427239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.427545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.427585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.427869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.427911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.428192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.428233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.428449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.428490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.428727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.428767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 
00:35:51.065 [2024-12-10 00:17:35.428942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.428984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.429244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.429285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.429556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.429596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.429875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.429917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.430215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.430256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.430522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.430562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.430787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.430838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.431146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.431187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.431450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.431491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.431772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.431812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 
00:35:51.065 [2024-12-10 00:17:35.432046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.432087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.432324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.432370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.432581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.432622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.065 [2024-12-10 00:17:35.432886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.065 [2024-12-10 00:17:35.432927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.065 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.433140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.433180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.433442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.433483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.433766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.433805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.434101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.434141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.434427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.434467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.434729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.434769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 
00:35:51.066 [2024-12-10 00:17:35.435001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.435042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.435235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.435277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.435440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.435480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.435673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.435713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.436023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.436065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.436357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.436396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.436601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.436641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.436946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.436988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.437271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.437310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.437595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.437635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 
00:35:51.066 [2024-12-10 00:17:35.437921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.437963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.438177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.438218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.438529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.438569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.438716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.438756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.439051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.439093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.439288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.439328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.439615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.439656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.439941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.439983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.440291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.440331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.440624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.440664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 
00:35:51.066 [2024-12-10 00:17:35.440892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.440933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.441143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.441183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.441396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.441436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.441646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.441686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.441973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.442014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.442276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.442317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.442593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.442633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.442851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.442893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.443055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.443096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.443293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.443333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 
00:35:51.066 [2024-12-10 00:17:35.443646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.443686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.443981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.444030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.066 [2024-12-10 00:17:35.444329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.066 [2024-12-10 00:17:35.444369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.066 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.444584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.444624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.444907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.444949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.445231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.445271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.445559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.445599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.445885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.445927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.446162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.446203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.446511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.446551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 
00:35:51.067 [2024-12-10 00:17:35.446842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.446884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.447193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.447234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.447376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.447416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.447646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.447686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.447922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.447964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.448271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.448312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.448607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.448648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.448859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.448900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.449183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.449223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.449436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.449476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 
00:35:51.067 [2024-12-10 00:17:35.449709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.449749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.450040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.450081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.450390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.450431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.450655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.450695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.450907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.450949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.451235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.451275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.451490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.451530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.451846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.451888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.452188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.452229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.452444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.452485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 
00:35:51.067 [2024-12-10 00:17:35.452753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.452793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.453044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.453085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.453282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.453322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.453612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.453653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.453893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.453936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.454239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.454279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.454559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.454600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.454871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.454913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.455182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.455222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.455499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.455540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 
00:35:51.067 [2024-12-10 00:17:35.455841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.455882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.456145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.067 [2024-12-10 00:17:35.456216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.067 qpair failed and we were unable to recover it. 00:35:51.067 [2024-12-10 00:17:35.456427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.456467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.456767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.456807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.457092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.457133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.457416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.457458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.457723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.457763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.457984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.458026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.458290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.458331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.458608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.458648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 
00:35:51.068 [2024-12-10 00:17:35.458943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.458986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.459227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.459268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.459569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.459609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.459907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.459949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.460234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.460274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.460534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.460574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.460864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.460905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.461106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.461147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.461412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.461452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.461677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.461717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 
00:35:51.068 [2024-12-10 00:17:35.462003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.462045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.462241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.462282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.462576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.462617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.462863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.462906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.463138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.463179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.463468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.463508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.463792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.463842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.464110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.464152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.464446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.464487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.464773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.464814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 
00:35:51.068 [2024-12-10 00:17:35.465133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.465174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.465464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.465504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.465812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.465862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.466172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.466213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.466503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.466543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.466847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.466890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.467188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.467229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.467378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.467418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.467723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.467763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.468094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.468135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 
00:35:51.068 [2024-12-10 00:17:35.468348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.068 [2024-12-10 00:17:35.468389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.068 qpair failed and we were unable to recover it. 00:35:51.068 [2024-12-10 00:17:35.468670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.468716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.468969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.469010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.469298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.469339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.469627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.469667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.469954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.469996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.470323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.470363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.470624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.470665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.470969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.471013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.471304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.471345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 
00:35:51.069 [2024-12-10 00:17:35.471654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.471695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.471990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.472032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.472249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.472289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.472512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.472552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.472750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.472790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.473134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.473176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.473453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.473494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.473722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.473763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.474059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.474100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.474323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.474363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 
00:35:51.069 [2024-12-10 00:17:35.474687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.474727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.474998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.475040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.475256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.475296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.475612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.475652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.475937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.475979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.476127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.476167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.476325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.476365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.476599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.476639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.476931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.476973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.477215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.477256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 
00:35:51.069 [2024-12-10 00:17:35.477494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.477534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.477844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.477885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.478155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.478196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.478462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.478503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.478790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.478858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.479148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.479188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.069 qpair failed and we were unable to recover it. 00:35:51.069 [2024-12-10 00:17:35.479501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.069 [2024-12-10 00:17:35.479541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.479776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.479816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.480030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.480071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.480384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.480424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 
00:35:51.070 [2024-12-10 00:17:35.480715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.480755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.481062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.481110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.481346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.481386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.481607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.481648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.481860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.481902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.482122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.482163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.482381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.482421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.482736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.482776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.482994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.483036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.483191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.483231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 
00:35:51.070 [2024-12-10 00:17:35.483517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.483557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.483853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.483896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.484209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.484249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.484544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.484584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.484896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.484938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.485171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.485212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.485525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.485565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.485855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.485898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.486153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.486193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.486453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.486494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 
00:35:51.070 [2024-12-10 00:17:35.486706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.486746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.486999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.487041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.487256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.487296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.487611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.487652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.487883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.487924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.488223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.488263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.488553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.488593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.488908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.488949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.489224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.489265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.489538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.489579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 
00:35:51.070 [2024-12-10 00:17:35.489789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.489840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.490136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.490178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.490460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.490501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.490737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.490777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.491089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.491131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.070 [2024-12-10 00:17:35.491407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.070 [2024-12-10 00:17:35.491448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.070 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.491722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.491763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.492060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.492102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.492396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.492436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.492734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.492775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 
00:35:51.071 [2024-12-10 00:17:35.493075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.493116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.493422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.493467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.493739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.493780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.494085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.494127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.494423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.494463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.494775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.494816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.495112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.495153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.495451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.495492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.495765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.495805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.496021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.496062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 
00:35:51.071 [2024-12-10 00:17:35.496294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.496335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.496555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.496595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.496914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.496957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.497276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.497317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.497613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.497653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.071 [2024-12-10 00:17:35.497948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.071 [2024-12-10 00:17:35.497990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.071 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.498238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.498281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.498572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.498611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.498844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.498888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.499169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.499210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 
00:35:51.348 [2024-12-10 00:17:35.499504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.499544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.499697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.499737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.499971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.500014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.500237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.500278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.500595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.500636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.500869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.500912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.501225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.501266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.501605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.501646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.501949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.501991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.502282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.502323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 
00:35:51.348 [2024-12-10 00:17:35.502604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.502644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.502944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.502986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.503266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.503306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.503603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.503643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.503940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.503983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.504270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.504310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.504456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.504497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.504702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.504743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.505050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.505092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.505299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.505340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 
00:35:51.348 [2024-12-10 00:17:35.505567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.505608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.505771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.505817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.506121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.506162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.506420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.506461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.506760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.506802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.507112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.507154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.348 qpair failed and we were unable to recover it. 00:35:51.348 [2024-12-10 00:17:35.507431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.348 [2024-12-10 00:17:35.507471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.507769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.507811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.508171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.508213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.508496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.508538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 
00:35:51.349 [2024-12-10 00:17:35.508812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.508869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.509146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.509188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.509401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.509444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.509771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.509811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.510104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.510147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.510383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.510425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.510704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.510745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.511074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.511117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.511343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.511392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.511660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.511702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 
00:35:51.349 [2024-12-10 00:17:35.511998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.512042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.512214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.512254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.512575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.512615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.512891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.512934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.513157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.513197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.513524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.513565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.513813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.513867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.514081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.514123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.514440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.514482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.514802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.514870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 
00:35:51.349 [2024-12-10 00:17:35.515101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.515142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.515364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.515406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.515647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.515687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.515917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.515960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.516237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.516279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.516525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.516566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.516791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.516842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.517155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.517197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.517424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.517465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.517751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.517793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 
00:35:51.349 [2024-12-10 00:17:35.518102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.518145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.518384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.518438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.518668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.518709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.518989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.519030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.519272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.349 [2024-12-10 00:17:35.519314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.349 qpair failed and we were unable to recover it. 00:35:51.349 [2024-12-10 00:17:35.519560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.519601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.519876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.519919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.520163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.520205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.520505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.520546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.520756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.520797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 
00:35:51.350 [2024-12-10 00:17:35.521130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.521172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.521400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.521441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.521684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.521724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.522024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.522068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.522318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.522359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.522604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.522645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.522919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.522962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.523236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.523277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.523584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.523625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.523884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.523927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 
00:35:51.350 [2024-12-10 00:17:35.524134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.524176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.524458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.524498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.524797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.524850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.525197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.525238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.525520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.525561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.525858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.525901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.526128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.526170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.526456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.526497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.526650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.526697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.526998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.527041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 
00:35:51.350 [2024-12-10 00:17:35.527322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.527363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.527660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.527700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.527973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.528016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.528221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.528262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.528565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.528606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.528811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.528864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.529191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.529232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.529471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.529512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.529665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.529706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.529938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.529980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 
00:35:51.350 [2024-12-10 00:17:35.530200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.530241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.530494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.530534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.530857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.530901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.350 [2024-12-10 00:17:35.531188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.350 [2024-12-10 00:17:35.531231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.350 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.531378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.531419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.531697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.531739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.531983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.532026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.532358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.532399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.532610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.532651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.532804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.532858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 
00:35:51.351 [2024-12-10 00:17:35.533101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.533151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.533405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.533448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.533745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.533786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.534037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.534082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.534368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.534412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.534703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.534756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.535004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.535048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.535262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.535307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.535631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.535674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.535902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.535944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 
00:35:51.351 [2024-12-10 00:17:35.536265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.536306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.536604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.536645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.536893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.536935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.537233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.537274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.537475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.537516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.537759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.537801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.538001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.538043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.538267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.538308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.538535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.538585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.538887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.538929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 
00:35:51.351 [2024-12-10 00:17:35.539186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.539228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.539497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.539543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.539760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.539801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.540042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.540085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.540367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.540411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.540710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.540752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.541074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.541118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.541362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.541405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.541732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.541773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.542029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.542073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 
00:35:51.351 [2024-12-10 00:17:35.542307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.542348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.542679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.542720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.351 qpair failed and we were unable to recover it. 00:35:51.351 [2024-12-10 00:17:35.542889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.351 [2024-12-10 00:17:35.542933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.543176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.543217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.543385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.543426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.543582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.543637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.543875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.543919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.544148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.544192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.544406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.544450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.544760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.544805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 
00:35:51.352 [2024-12-10 00:17:35.545118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.545163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.545399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.545443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.545696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.545742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.545992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.546039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.546246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.546294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.546455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.546503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.546670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.546714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.546946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.546994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.547238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.547282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.547462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.547513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 
00:35:51.352 [2024-12-10 00:17:35.547690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.547738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.548059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.548106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.548264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.548313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.548486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.548530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.548759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.548808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.548978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.549023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.549239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.549289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.549442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.549484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.549644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.549692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.549947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.549990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 
00:35:51.352 [2024-12-10 00:17:35.550208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.550249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.550416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.550458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.550609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.550651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.550819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.550874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.551172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.352 [2024-12-10 00:17:35.551216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.352 qpair failed and we were unable to recover it. 00:35:51.352 [2024-12-10 00:17:35.551442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.551484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.551782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.551838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.552086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.552130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.552411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.552452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.552725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.552765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 
00:35:51.353 [2024-12-10 00:17:35.553017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.553059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.553283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.553332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.553642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.553684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.553924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.553969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.554214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.554255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.554550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.554592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.554821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.554877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.555092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.555134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.555340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.555383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.555588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.555629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 
00:35:51.353 [2024-12-10 00:17:35.555868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.555911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.556119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.556160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.556457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.556499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.556744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.556785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.557050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.557092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.557401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.557444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.557670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.557711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.557959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.558010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.558239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.558280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.558502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.558542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 
00:35:51.353 [2024-12-10 00:17:35.558849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.558893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.559121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.559162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.559461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.559502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.559658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.559698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.559994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.560035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.560181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.560222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.560444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.560486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.560734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.560774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.561009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.561058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.561308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.561349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 
00:35:51.353 [2024-12-10 00:17:35.561665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.353 [2024-12-10 00:17:35.561706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.353 qpair failed and we were unable to recover it. 00:35:51.353 [2024-12-10 00:17:35.561928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.561970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.562195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.562234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.562531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.562571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.562727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.562767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.562995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.563036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.563328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.563370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.563594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.563635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.563939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.563981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.564313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.564354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 
00:35:51.354 [2024-12-10 00:17:35.564598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.564638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.564882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.564924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.565132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.565173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.565440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.565480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.565812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.565866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.566175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.566216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.566538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.566578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.566872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.566915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.567156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.567198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.567419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.567459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 
00:35:51.354 [2024-12-10 00:17:35.567727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.567767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.567912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.567953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.568160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.568201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.568472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.568514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.568773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.568814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.569079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.569122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.569325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.569366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.569661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.569701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.569887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.569928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.570142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.570182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 
00:35:51.354 [2024-12-10 00:17:35.570477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.570518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.570680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.570721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.571003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.571045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.571318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.354 [2024-12-10 00:17:35.571359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.354 qpair failed and we were unable to recover it. 00:35:51.354 [2024-12-10 00:17:35.571586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.571627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.571901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.571942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.572241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.572282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.572611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.572651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.572874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.572922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.573197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.573239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 
00:35:51.355 [2024-12-10 00:17:35.573466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.573507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.573711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.573753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.574055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.574098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.574367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.574407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.574686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.574727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.575024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.575066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.575233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.575274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.575560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.575600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.575890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.575932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.576156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.576195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 
00:35:51.355 [2024-12-10 00:17:35.576416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.576457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.576739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.576779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.577095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.577138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.577288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.577328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.577478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.577518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.577663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.577704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.577921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.577962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.578203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.578243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.578462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.578502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.578788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.578840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 
00:35:51.355 [2024-12-10 00:17:35.579089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.579129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.579376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.579417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.579690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.579730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.579894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.579936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.580257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.580298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.580604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.580646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.580878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.580919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.581077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.581117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.581395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.581435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 00:35:51.355 [2024-12-10 00:17:35.581701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.355 [2024-12-10 00:17:35.581742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.355 qpair failed and we were unable to recover it. 
00:35:51.356 [2024-12-10 00:17:35.582074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.582116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.582339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.582380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.582698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.582739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.583002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.583043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.583265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.583305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.583623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.583664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.583883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.583926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.584174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.584215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.584471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.584520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.584725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.584766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 
00:35:51.356 [2024-12-10 00:17:35.585005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.585047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.585287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.585328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.585632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.585673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.585995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.586036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.586263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.586303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.586559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.586600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.586846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.586888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.587134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.587175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.587467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.587507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.587820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.587874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 
00:35:51.356 [2024-12-10 00:17:35.588052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.588093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.588260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.588300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.588583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.588624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.588909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.588951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.589230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.589271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.589555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.589595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.589889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.589930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.590222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.590263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.590598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.590638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.590952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.590995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 
00:35:51.356 [2024-12-10 00:17:35.591215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.591255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.591575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.591616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.356 [2024-12-10 00:17:35.591973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.356 [2024-12-10 00:17:35.592016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.356 qpair failed and we were unable to recover it. 00:35:51.357 [2024-12-10 00:17:35.592296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.357 [2024-12-10 00:17:35.592337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.357 qpair failed and we were unable to recover it. 00:35:51.357 [2024-12-10 00:17:35.592591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.357 [2024-12-10 00:17:35.592631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.357 qpair failed and we were unable to recover it. 00:35:51.357 [2024-12-10 00:17:35.592881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.357 [2024-12-10 00:17:35.592923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.357 qpair failed and we were unable to recover it. 00:35:51.357 [2024-12-10 00:17:35.593226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.357 [2024-12-10 00:17:35.593266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.357 qpair failed and we were unable to recover it. 00:35:51.357 [2024-12-10 00:17:35.593508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.357 [2024-12-10 00:17:35.593548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.357 qpair failed and we were unable to recover it. 00:35:51.357 [2024-12-10 00:17:35.593850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.357 [2024-12-10 00:17:35.593891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.357 qpair failed and we were unable to recover it. 00:35:51.357 [2024-12-10 00:17:35.594208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.357 [2024-12-10 00:17:35.594249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.357 qpair failed and we were unable to recover it. 
00:35:51.357 .. 00:35:51.363 [2024-12-10 00:17:35.594545] .. [2024-12-10 00:17:35.654851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (this same three-line failure sequence repeats, with only the timestamps advancing, for every remaining connection attempt in this interval)
00:35:51.363 [2024-12-10 00:17:35.655174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.655216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.655455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.655497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.655793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.655844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.656099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.656141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.656435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.656475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.656808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.656860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.657180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.657221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.657458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.657498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.657721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.657768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.658077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.658119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 
00:35:51.363 [2024-12-10 00:17:35.658414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.658456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.658727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.658768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.659060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.659102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.659345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.659386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.659681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.659722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.659956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.659999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.660241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.660282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.660575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.660615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.660891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.660934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.661234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.661275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 
00:35:51.363 [2024-12-10 00:17:35.661511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.661551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.661850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.661892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.662143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.662184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.662359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.363 [2024-12-10 00:17:35.662400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.363 qpair failed and we were unable to recover it. 00:35:51.363 [2024-12-10 00:17:35.662627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.662667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.662813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.662884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.663168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.663209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.663461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.663501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.663805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.663860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.664137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.664178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 
00:35:51.364 [2024-12-10 00:17:35.664505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.664546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.664790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.664846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.665157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.665197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.665472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.665513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.665794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.665847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.666079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.666121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.666386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.666426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.666582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.666623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.666895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.666938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.667154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.667195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 
00:35:51.364 [2024-12-10 00:17:35.667513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.667555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.667789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.667840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.668054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.668094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.668322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.668363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.668659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.668700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.668917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.668959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.669176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.669218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.669520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.669560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.669805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.669862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.670157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.670198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 
00:35:51.364 [2024-12-10 00:17:35.670474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.670514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.670810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.670866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.671009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.671049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.671349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.671390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.671636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.671676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.671987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.672030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.672265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.672306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.672526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.672567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.672888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.672930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.364 [2024-12-10 00:17:35.673160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.673201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 
00:35:51.364 [2024-12-10 00:17:35.673504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.364 [2024-12-10 00:17:35.673544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.364 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.673785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.673853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.674095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.674137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.674345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.674385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.674689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.674729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.674952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.674995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.675271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.675312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.675646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.675686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.675962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.676005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.676281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.676322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 
00:35:51.365 [2024-12-10 00:17:35.676603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.676644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.676859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.676902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.677141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.677181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.677464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.677504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.677800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.677853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.678199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.678241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.678487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.678527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.678839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.678882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.679221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.679262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.679599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.679639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 
00:35:51.365 [2024-12-10 00:17:35.679995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.680038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.680336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.680377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.680618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.680659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.680892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.680934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.681180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.681220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.681444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.681484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.681747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.681787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.682027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.682068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.682274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.682315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.682546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.682588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 
00:35:51.365 [2024-12-10 00:17:35.682886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.682929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.683203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.683243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.683549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.683590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.683812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.683866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.684157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.684198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.684381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.684422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.684704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.684745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.685037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.685079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.685306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.685348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 00:35:51.365 [2024-12-10 00:17:35.685656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.365 [2024-12-10 00:17:35.685696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.365 qpair failed and we were unable to recover it. 
00:35:51.366 [2024-12-10 00:17:35.685963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.686006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.686284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.686325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.686599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.686640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.686954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.686996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.687229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.687270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.687546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.687586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.687863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.687905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.688195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.688237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.688493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.688534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.688841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.688882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 
00:35:51.366 [2024-12-10 00:17:35.689188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.689228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.689416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.689457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.689756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.689796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.689979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.690021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.690188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.690229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.690551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.690598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.690894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.690936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.691185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.691228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.691444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.691484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.691811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.691867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 
00:35:51.366 [2024-12-10 00:17:35.692116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.692157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.692469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.692510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.692784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.692836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.693003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.693044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.693326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.693367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.693662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.693703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.693959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.694001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.694292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.694333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.694581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.694621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.694866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.694910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 
00:35:51.366 [2024-12-10 00:17:35.695139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.695181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.695425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.695466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.695701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.695742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.695961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.696004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.696304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.696345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.696577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.696618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.696952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.366 [2024-12-10 00:17:35.696994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.366 qpair failed and we were unable to recover it. 00:35:51.366 [2024-12-10 00:17:35.697317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.697358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.697584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.697624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.697952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.697995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 
00:35:51.367 [2024-12-10 00:17:35.698270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.698311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.698469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.698510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.698738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.698779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.699023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.699065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.699292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.699332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.699500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.699540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.699848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.699890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.700052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.700093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.700333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.700374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.700668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.700709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 
00:35:51.367 [2024-12-10 00:17:35.701002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.701044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.701237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.701278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.701563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.701604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.701898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.701939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.702212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.702253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.702497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.702545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.702816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.702868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.703123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.703163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.703473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.703514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.703806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.703857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 
00:35:51.367 [2024-12-10 00:17:35.704162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.704203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.704407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.704447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.704669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.704710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.704927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.704970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.705243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.705284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.705424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.705464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.705778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.705818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.706091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.706132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.706372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.706413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 00:35:51.367 [2024-12-10 00:17:35.706653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.367 [2024-12-10 00:17:35.706695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.367 qpair failed and we were unable to recover it. 
00:35:51.367 [2024-12-10 00:17:35.706977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.707019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.707292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.707332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.707676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.707716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.708053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.708095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.708272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.708312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.708617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.708658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.708879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.708922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.709243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.709283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.709511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.709552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.709855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.709896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 
00:35:51.368 [2024-12-10 00:17:35.710122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.710163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.710410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.710451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.710758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.710799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.711042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.711083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.711376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.711417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.711714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.711755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.712017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.712059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.712292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.712332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.712613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.712654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.712838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.712881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 
00:35:51.368 [2024-12-10 00:17:35.713109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.713150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.713310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.713351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.713513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.713554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.713722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.713762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.714072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.714114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.714353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.714401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.714655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.714696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.714934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.714976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.715285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.715325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.715575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.715616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 
00:35:51.368 [2024-12-10 00:17:35.715917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.715959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.716185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.716226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.716485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.716526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.716691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.716731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.717002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.717043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.717268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.717309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.717585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.717627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.717879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.717920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.368 qpair failed and we were unable to recover it. 00:35:51.368 [2024-12-10 00:17:35.718175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.368 [2024-12-10 00:17:35.718216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.718467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.718508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 
00:35:51.369 [2024-12-10 00:17:35.718807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.718860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.719160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.719200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.719491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.719532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.719874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.719917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.720155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.720196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.720422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.720463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.720735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.720776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.721062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.721104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.721390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.721430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.721736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.721777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 
00:35:51.369 [2024-12-10 00:17:35.721965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.722006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.722179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.722220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.722465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.722507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.722714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.722755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.723069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.723111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.723406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.723447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.723762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.723802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.724036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.724077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.724380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.724423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.724685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.724724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 
00:35:51.369 [2024-12-10 00:17:35.724948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.724990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.725288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.725329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.725591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.725631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.725928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.725970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.726268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.726309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.726606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.726652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.726907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.726949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.727237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.727278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.727500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.727541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.727774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.727814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 
00:35:51.369 [2024-12-10 00:17:35.728083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.728124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.728395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.728436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.728713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.728753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.729055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.729098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.729274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.729314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.729683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.729724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.729977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.369 [2024-12-10 00:17:35.730021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.369 qpair failed and we were unable to recover it. 00:35:51.369 [2024-12-10 00:17:35.730187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.730228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.730467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.730508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.730844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.730886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 
00:35:51.370 [2024-12-10 00:17:35.731123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.731164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.731317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.731358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.731661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.731701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.731995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.732037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.732255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.732295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.732549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.732589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.732869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.732912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.733129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.733170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.733467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.733508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.733806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.733860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 
00:35:51.370 [2024-12-10 00:17:35.734097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.734137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.734386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.734428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.734653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.734695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.734958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.735000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.735221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.735262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.735466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.735507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.735721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.735761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.736016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.736059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.736320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.736362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.736587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.736627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 
00:35:51.370 [2024-12-10 00:17:35.736953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.736995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.737325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.737367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.737649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.737690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.737895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.737937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.738162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.738202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.738421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.738467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.738764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.738805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.739115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.739157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.739450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.739490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.739749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.739790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 
00:35:51.370 [2024-12-10 00:17:35.739978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.740019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.740188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.740228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.740531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.740572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.740889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.740933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.741118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.741158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.741431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.370 [2024-12-10 00:17:35.741472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.370 qpair failed and we were unable to recover it. 00:35:51.370 [2024-12-10 00:17:35.741765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.741806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.741983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.742024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.742169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.742210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.742519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.742560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 
00:35:51.371 [2024-12-10 00:17:35.742900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.742942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.743254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.743295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.743627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.743668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.743887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.743929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.744114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.744155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.744476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.744517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.744732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.744773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.745040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.745082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.745335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.745375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.745670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.745711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 
00:35:51.371 [2024-12-10 00:17:35.746008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.746050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.746264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.746305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.746556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.746598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.746848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.746890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.747185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.747227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.747525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.747567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.747876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.747918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.748159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.748200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.748480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.748521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.748789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.748856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 
00:35:51.371 [2024-12-10 00:17:35.749122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.749162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.749462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.749504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.749800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.749855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.750072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.750113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.750299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.750340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.750647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.750694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.750988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.751030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.751205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.751246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.751525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.751567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 00:35:51.371 [2024-12-10 00:17:35.751838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.751883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.371 qpair failed and we were unable to recover it. 
00:35:51.371 [2024-12-10 00:17:35.752130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.371 [2024-12-10 00:17:35.752171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.752450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.752493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.752729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.752769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.753006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.753048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.753278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.753319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.753645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.753686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.753983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.754025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.754272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.754321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.754625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.754666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.754939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.754982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 
00:35:51.372 [2024-12-10 00:17:35.755159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.755200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.755417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.755457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.755750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.755791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.756020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.756061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.756289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.756329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.756555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.756595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.756768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.756809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.757043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.757085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.757271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.757312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.757635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.757676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 
00:35:51.372 [2024-12-10 00:17:35.757848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.757890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.758167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.758208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.758506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.758547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.758707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.758748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.759052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.759095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.759313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.759355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.759558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.759599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.759812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.759869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.760097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.760138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.760437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.760477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 
00:35:51.372 [2024-12-10 00:17:35.760714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.760754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.761011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.761053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.761353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.761393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.761687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.761729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.761994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.762037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.762315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.762363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.762666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.372 [2024-12-10 00:17:35.762707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.372 qpair failed and we were unable to recover it. 00:35:51.372 [2024-12-10 00:17:35.762926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.762969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.763141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.763182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.763480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.763520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 
00:35:51.373 [2024-12-10 00:17:35.763692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.763733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.763902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.763945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.764183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.764223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.764476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.764517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.764807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.764863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.765099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.765139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.765463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.765504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.765805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.765857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.766146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.766186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.766417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.766458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 
00:35:51.373 [2024-12-10 00:17:35.766758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.766800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.767060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.767103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.767329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.767370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.767643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.767684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.767919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.767962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.768242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.768283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.768577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.768618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.768914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.768956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.769253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.769295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.769538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.769584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 
00:35:51.373 [2024-12-10 00:17:35.769932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.769974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.770200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.770240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.770423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.770465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.770687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.770727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.770945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.770988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.771283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.771325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.771555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.771596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.771961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.772005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.772233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.772274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.772564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.772604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 
00:35:51.373 [2024-12-10 00:17:35.772883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.772925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.773093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.773133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.773425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.373 [2024-12-10 00:17:35.773467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.373 qpair failed and we were unable to recover it. 00:35:51.373 [2024-12-10 00:17:35.773754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.773794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.774048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.774090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.774367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.774415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.774641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.774683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.774908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.774951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.775166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.775205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.775379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.775419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 
00:35:51.374 [2024-12-10 00:17:35.775642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.775683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.775938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.775981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.776279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.776320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.776605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.776647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.776899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.776942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.777188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.777230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.777450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.777491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.777786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.777837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.778136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.778177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.778414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.778455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 
00:35:51.374 [2024-12-10 00:17:35.778693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.778734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.778955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.778997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.779309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.779350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.779596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.779636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.779961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.780004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.780280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.780321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.780633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.780674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.780935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.780978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.781253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.781294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.781558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.781599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 
00:35:51.374 [2024-12-10 00:17:35.781907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.781949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.782190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.782232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.782555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.782596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.782910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.782952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.783271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.783313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.783556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.783595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.783814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.783870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.784100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.784141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.784363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.784403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.374 [2024-12-10 00:17:35.784702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.784742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 
00:35:51.374 [2024-12-10 00:17:35.784993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.374 [2024-12-10 00:17:35.785036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.374 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.785264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.785305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.785612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.785653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.785934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.785976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.786221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.786262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.786570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.786616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.786893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.786936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.787095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.787137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.787414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.787455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.787750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.787791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 
00:35:51.375 [2024-12-10 00:17:35.788042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.788085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.788307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.788348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.788674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.788715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.788957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.789000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.789231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.789273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.789559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.789600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.789880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.789923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.790100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.790141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.790431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.790472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.790694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.790735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 
00:35:51.375 [2024-12-10 00:17:35.791062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.791104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.791339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.791380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.791684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.791725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.792059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.792102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.792331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.792376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.792567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.792608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.792812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.792863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.793092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.793134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.793406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.793447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.793607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.793648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 
00:35:51.375 [2024-12-10 00:17:35.793898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.793944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.794243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.794284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.794570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.794611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.794911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.794956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.795137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.795180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.795405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.795447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.795774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.795815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.375 qpair failed and we were unable to recover it. 00:35:51.375 [2024-12-10 00:17:35.796032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.375 [2024-12-10 00:17:35.796074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.796390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.796435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.796775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.796817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 
00:35:51.376 [2024-12-10 00:17:35.797053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.797094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.797326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.797374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.797610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.797651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.797953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.797995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.798272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.798313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.798596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.798643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.798947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.798989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.799213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.799254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.799420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.799461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.799678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.799718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 
00:35:51.376 [2024-12-10 00:17:35.799969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.800012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.800292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.800332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.800650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.800691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.800971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.801013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.801188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.801228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.801451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.801492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.801734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.801775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.801971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.802013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.802261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.802303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.802614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.802656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 
00:35:51.376 [2024-12-10 00:17:35.802949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.802991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.803268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.803309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.803562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.803603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.803849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.803890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.804066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.804106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.376 [2024-12-10 00:17:35.804301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.376 [2024-12-10 00:17:35.804342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.376 qpair failed and we were unable to recover it. 00:35:51.652 [2024-12-10 00:17:35.804686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.652 [2024-12-10 00:17:35.804729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.652 qpair failed and we were unable to recover it. 00:35:51.652 [2024-12-10 00:17:35.804950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.652 [2024-12-10 00:17:35.804992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.652 qpair failed and we were unable to recover it. 00:35:51.652 [2024-12-10 00:17:35.805231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.652 [2024-12-10 00:17:35.805273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.652 qpair failed and we were unable to recover it. 00:35:51.652 [2024-12-10 00:17:35.805496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.652 [2024-12-10 00:17:35.805538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.652 qpair failed and we were unable to recover it. 
00:35:51.652 [2024-12-10 00:17:35.805861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.652 [2024-12-10 00:17:35.805903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.652 qpair failed and we were unable to recover it. 00:35:51.652 [2024-12-10 00:17:35.806177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.652 [2024-12-10 00:17:35.806217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.652 qpair failed and we were unable to recover it. 00:35:51.652 [2024-12-10 00:17:35.806449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.652 [2024-12-10 00:17:35.806491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.652 qpair failed and we were unable to recover it. 00:35:51.652 [2024-12-10 00:17:35.806722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.652 [2024-12-10 00:17:35.806763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.652 qpair failed and we were unable to recover it. 00:35:51.652 [2024-12-10 00:17:35.806994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.652 [2024-12-10 00:17:35.807034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.652 qpair failed and we were unable to recover it. 00:35:51.652 [2024-12-10 00:17:35.807308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.652 [2024-12-10 00:17:35.807348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.652 qpair failed and we were unable to recover it. 00:35:51.652 [2024-12-10 00:17:35.807653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.652 [2024-12-10 00:17:35.807694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.652 qpair failed and we were unable to recover it. 00:35:51.652 [2024-12-10 00:17:35.807990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.652 [2024-12-10 00:17:35.808031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.652 qpair failed and we were unable to recover it. 00:35:51.652 [2024-12-10 00:17:35.808193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.652 [2024-12-10 00:17:35.808235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.652 qpair failed and we were unable to recover it. 00:35:51.652 [2024-12-10 00:17:35.808537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.652 [2024-12-10 00:17:35.808580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.652 qpair failed and we were unable to recover it. 
00:35:51.653 [2024-12-10 00:17:35.808845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.808888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.809116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.809158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.809460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.809502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.809819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.809876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.810174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.810216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.810516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.810565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.810889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.810932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.811178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.811219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.811522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.811564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.811873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.811917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 
00:35:51.653 [2024-12-10 00:17:35.812072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.812114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.812410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.812453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.812750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.812793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.813052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.813094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.813382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.813424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.813718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.813761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.814000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.814044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.814273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.814315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.814569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.814611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.814949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.814994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 
00:35:51.653 [2024-12-10 00:17:35.815217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.815260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.815567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.815611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.815945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.815990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.816305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.816348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.816509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.816552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.816855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.816899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.817197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.817239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.817541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.817582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.817881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.817924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.818220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.818263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 
00:35:51.653 [2024-12-10 00:17:35.818565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.818608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.818910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.818953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.819241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.819287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.819601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.819644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.819872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.819915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.820143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.820185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.820505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.820552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.820885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.820929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.821165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.821208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.821511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.821554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 
00:35:51.653 [2024-12-10 00:17:35.821817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.821871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.822080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.822122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.822274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.822316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.822610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.822653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.822953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.822997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.823217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.823266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.823581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.823623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.823848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.823892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.824178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.824221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.824506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.824549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 
00:35:51.653 [2024-12-10 00:17:35.824820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.824877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.825158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.825201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.825454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.825498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.825790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.825848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.826094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.826136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.826365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.826408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.826695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.826743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.827072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.827116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.827342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.827384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.827693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.827736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 
00:35:51.653 [2024-12-10 00:17:35.827969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.828013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.828321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.828364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.828610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.828652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.828871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.828915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.829159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.829200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.829484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.829526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.829745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.829788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.830021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.830064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.830366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.830408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.830727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.830770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 
00:35:51.653 [2024-12-10 00:17:35.830948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.830993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.831218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.831261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.831425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.831468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.831684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.653 [2024-12-10 00:17:35.831726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.653 qpair failed and we were unable to recover it. 00:35:51.653 [2024-12-10 00:17:35.832031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.832074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.832299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.832342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.832661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.832702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.832990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.833034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.833285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.833326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.833637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.833679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 
00:35:51.654 [2024-12-10 00:17:35.833957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.834003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.834268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.834310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.834587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.834630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.834949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.834993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.835306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.835347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.835585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.835628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.835964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.836008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.836301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.836343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.836620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.836663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.836959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.837002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 
00:35:51.654 [2024-12-10 00:17:35.837251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.837293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.837581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.837624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.837902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.837944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.838221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.838263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.838513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.838556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.838797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.838855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.839069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.839112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.839326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.839368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.839608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.839650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.839957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.840001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 
00:35:51.654 [2024-12-10 00:17:35.840327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.840369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.840667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.840709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.840988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.841032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.841257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.841298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.841542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.841586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.841767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.841810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.842072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.842114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.844024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.844102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.844391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.844436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.844716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.844759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 
00:35:51.654 [2024-12-10 00:17:35.845060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.845104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.845379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.845421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.845746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.845798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.846041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.846084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.846259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.846301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.846526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.846569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.846853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.846898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.847178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.847220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.847445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.847491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.847723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.847766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 
00:35:51.654 [2024-12-10 00:17:35.847999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.848043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.848363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.848405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.848649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.848693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.848979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.849023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.849237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.849279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.849440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.849484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.849795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.849852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.850033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.850075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.850303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.850346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.850518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.850561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 
00:35:51.654 [2024-12-10 00:17:35.850772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.850815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.851115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.851159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.851466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.851510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.851732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.851774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.851959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.852003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.852174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.852217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.852395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.852438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.852597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.852639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.852926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.852970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.853152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.853195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 
00:35:51.654 [2024-12-10 00:17:35.853421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.853464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.853757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.853799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.854144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.854187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.854466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.854507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.854731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.854774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.855060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.855104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.855398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.855440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.855655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.855697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.654 qpair failed and we were unable to recover it. 00:35:51.654 [2024-12-10 00:17:35.855972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.654 [2024-12-10 00:17:35.856016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.856291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.856334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 
00:35:51.655 [2024-12-10 00:17:35.856669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.856712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.856982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.857026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.857187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.857236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.857411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.857453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.857692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.857735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.857970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.858013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.858289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.858332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.858573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.858616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.858866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.858909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.859159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.859201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 
00:35:51.655 [2024-12-10 00:17:35.859523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.859567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.859735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.859778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.860017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.860062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.860346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.860389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.860663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.860706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.860903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.860949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.861226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.861268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.861575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.861617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.861876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.861919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.862220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.862262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 
00:35:51.655 [2024-12-10 00:17:35.862567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.862610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.862861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.862904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.863076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.863119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.863393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.863436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.863663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.863705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.863945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.863989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.864212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.864254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.864473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.864515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.864767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.864808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.865127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.865171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 
00:35:51.655 [2024-12-10 00:17:35.865496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.865539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.865744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.865787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.866079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.866123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.866339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.866381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.866629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.866672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.866960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.867004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.867248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.867291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.869169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.869235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.869589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.869635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.869966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.870010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 
00:35:51.655 [2024-12-10 00:17:35.870251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.870294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.870471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.870514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.870846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.870898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.871172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.871214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.871532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.871575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.871845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.871889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.872177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.872220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.872455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.872497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.872737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.872779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.873020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.873064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 
00:35:51.655 [2024-12-10 00:17:35.873239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.873281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.873607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.873650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.873877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.873921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.874109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.874151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.874397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.874439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.874713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.874756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.875034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.875079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.877091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.877160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.877441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.877484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.877790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.877851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 
00:35:51.655 [2024-12-10 00:17:35.878130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.878173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.878417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.878460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.878680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.878722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.879050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.879095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.879369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.879411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.655 [2024-12-10 00:17:35.879588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.655 [2024-12-10 00:17:35.879631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.655 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.879896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.879941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.880182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.880226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.880437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.880480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.880795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.880852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 
00:35:51.656 [2024-12-10 00:17:35.881134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.881177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.881356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.881398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.881618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.881660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.881952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.881996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.882248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.882291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.882603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.882645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.882926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.882970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.883282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.883325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.883545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.883587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.883869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.883912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 
00:35:51.656 [2024-12-10 00:17:35.884139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.884181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.884343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.884385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.884632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.884697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.884929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.884973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.885263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.885305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.885523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.885564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.885776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.885818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.885996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.886038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.886262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.886303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.886467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.886509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 
00:35:51.656 [2024-12-10 00:17:35.886687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.886729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.886951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.886995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.887221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.887264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.887492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.887534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.887738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.887780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.888037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.888080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.888371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.888414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.888635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.888677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.888847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.888890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.889151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.889193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 
00:35:51.656 [2024-12-10 00:17:35.889351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.889393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.889578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.889621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.889801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.889853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.890080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.890122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.890410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.890453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.890739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.890781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.891090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.891132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.891453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.891495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.891817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.891881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.892068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.892111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 
00:35:51.656 [2024-12-10 00:17:35.892274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.892316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.892543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.892585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.892745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.892787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.893023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.893066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.893294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.893336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.893640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.893682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.893950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.893994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.894279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.894322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.894574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.894616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.894866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.894910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 
00:35:51.656 [2024-12-10 00:17:35.895138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.895180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.895460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.895501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.895728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.895777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.895992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.896036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.896243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.896285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.896527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.896569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.896868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.896912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.897146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.897188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.897478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.897520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.897849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.897893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 
00:35:51.656 [2024-12-10 00:17:35.898061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.898102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.898270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.898313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.898612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.898653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.898879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.898924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.899202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.899243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.899411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.899453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.899677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.899720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.899894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.899938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.900086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.900129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 00:35:51.656 [2024-12-10 00:17:35.900406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.656 [2024-12-10 00:17:35.900449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.656 qpair failed and we were unable to recover it. 
00:35:51.656 [2024-12-10 00:17:35.900750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.657 [2024-12-10 00:17:35.900792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.657 qpair failed and we were unable to recover it. 00:35:51.657 [2024-12-10 00:17:35.901018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.657 [2024-12-10 00:17:35.901062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.657 qpair failed and we were unable to recover it. 00:35:51.657 [2024-12-10 00:17:35.901300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.657 [2024-12-10 00:17:35.901342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.657 qpair failed and we were unable to recover it. 00:35:51.657 [2024-12-10 00:17:35.901613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.657 [2024-12-10 00:17:35.901655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.657 qpair failed and we were unable to recover it. 00:35:51.657 [2024-12-10 00:17:35.901869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.657 [2024-12-10 00:17:35.901912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.657 qpair failed and we were unable to recover it. 00:35:51.657 [2024-12-10 00:17:35.902199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.657 [2024-12-10 00:17:35.902242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.657 qpair failed and we were unable to recover it. 00:35:51.657 [2024-12-10 00:17:35.902405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.657 [2024-12-10 00:17:35.902447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.657 qpair failed and we were unable to recover it. 00:35:51.657 [2024-12-10 00:17:35.902654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.657 [2024-12-10 00:17:35.902696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.657 qpair failed and we were unable to recover it. 00:35:51.657 [2024-12-10 00:17:35.902858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.657 [2024-12-10 00:17:35.902902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.657 qpair failed and we were unable to recover it. 00:35:51.657 [2024-12-10 00:17:35.903092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.657 [2024-12-10 00:17:35.903134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.657 qpair failed and we were unable to recover it. 
00:35:51.657 [2024-12-10 00:17:35.903449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.657 [2024-12-10 00:17:35.903491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.657 qpair failed and we were unable to recover it. 
[... the same pair of errors, posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it.", repeats continuously for further connect attempts through 2024-12-10 00:17:35.965575 ...]
00:35:51.660 [2024-12-10 00:17:35.965533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.965575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 
00:35:51.660 [2024-12-10 00:17:35.965859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.965903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.966205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.966247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.966537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.966586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.966901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.966945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.967192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.967234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.967532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.967574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.967791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.967844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.968075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.968117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.968296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.968337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.968642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.968685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 
00:35:51.660 [2024-12-10 00:17:35.968921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.968965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.969125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.969168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.969454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.969497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.969700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.969742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.969996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.970040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.970268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.970310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.970564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.970607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.970762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.970804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.971099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.971143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.971368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.971411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 
00:35:51.660 [2024-12-10 00:17:35.971626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.971668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.971957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.972000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.972169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.972211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.972524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.972566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.972875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.972919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.973165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.973208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.973428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.973471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.973765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.973807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.974028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.974071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.974225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.974268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 
00:35:51.660 [2024-12-10 00:17:35.974580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.974622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.974873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.974917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.975102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.975144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.975369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.975412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.975719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.975760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.976043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.976089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.976313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.976356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.976590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.976633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.976962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.977006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.977237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.977281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 
00:35:51.660 [2024-12-10 00:17:35.977514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.977556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.977755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.977797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.978037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.978087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.978262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.978304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.978478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.978520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.978794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.978849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.979075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.979117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.979424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.979468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.979783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.979836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.980017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.980060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 
00:35:51.660 [2024-12-10 00:17:35.980279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.980321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.980560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.980603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.980785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.980842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.981074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.981117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.981443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.981485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.981782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.981839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.982072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.982117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.982413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.982454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.982720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.982763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.983020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.983064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 
00:35:51.660 [2024-12-10 00:17:35.983227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.983269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.983603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.983645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.983837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.983883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.660 qpair failed and we were unable to recover it. 00:35:51.660 [2024-12-10 00:17:35.984101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.660 [2024-12-10 00:17:35.984143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.984356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.984398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.984675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.984716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.984935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.984979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.985211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.985253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.985512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.985554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.985871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.985919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 
00:35:51.661 [2024-12-10 00:17:35.986198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.986240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.986422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.986464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.986766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.986809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.987070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.987113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.987347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.987388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.987611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.987653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.987879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.987922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.988149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.988192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.988418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.988461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.988686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.988729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 
00:35:51.661 [2024-12-10 00:17:35.989004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.989048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.989216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.989259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.989562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.989605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.989858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.989902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.990072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.990116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.990344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.990386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.990629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.990671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.990853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.990897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.991063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.991106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.991274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.991316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 
00:35:51.661 [2024-12-10 00:17:35.991665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.991710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.992013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.992057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.992359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.992400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.992704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.992747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.993047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.993091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.993337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.993379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.993617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.993660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.993966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.994011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.994178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.994220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.994447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.994489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 
00:35:51.661 [2024-12-10 00:17:35.994730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.994773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.995042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.995087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.995264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.995306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.995552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.995595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.995871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.995916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.996143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.996186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.996341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.996383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.996658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.996700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.996986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.997030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.997246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.997295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 
00:35:51.661 [2024-12-10 00:17:35.997579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.997621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.997946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.997990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.998291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.998333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.998655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.998698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.998946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.998989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.999217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.999259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.999440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.999483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:35.999779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:35.999821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.000151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.000194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.000495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.000538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 
00:35:51.661 [2024-12-10 00:17:36.000843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.000887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.001106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.001149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.001373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.001416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.001678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.001721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.002000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.002044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.002304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.002347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.002568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.002610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.002907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.002952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.003250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.003293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.003544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.003586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 
00:35:51.661 [2024-12-10 00:17:36.003796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.003848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.004170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.004213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.004488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.004530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.004840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.004884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.005120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.005162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.005340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.005383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.005635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.005678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.005977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.006020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.661 qpair failed and we were unable to recover it. 00:35:51.661 [2024-12-10 00:17:36.006300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.661 [2024-12-10 00:17:36.006343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.006598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.006640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 
00:35:51.662 [2024-12-10 00:17:36.006956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.007001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.007216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.007258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.007475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.007517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.007790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.007844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.008144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.008187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.008463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.008505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.008752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.008795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.009047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.009090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.009317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.009363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.009655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.009707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 
00:35:51.662 [2024-12-10 00:17:36.009956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.010001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.010153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.010196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.010431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.010476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.010807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.010863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.011180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.011222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.011524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.011567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.011878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.011929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.012262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.012307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.012444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.012487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.012780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.012836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 
00:35:51.662 [2024-12-10 00:17:36.013016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.013059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.013340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.013381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.013672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.013714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.013950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.013994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.014204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.014246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.014558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.014601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.014899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.014942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.015209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.015252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.015557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.015599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.015867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.015910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 
00:35:51.662 [2024-12-10 00:17:36.016182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.016224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.016539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.016582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.016879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.016922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.017100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.017143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.017417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.017459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.017758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.017800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.018045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.018088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.018243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.018285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.018592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.018634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.018942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.018985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 
00:35:51.662 [2024-12-10 00:17:36.019212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.019254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.019425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.019467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.019700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.019742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.020053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.020097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.020352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.020394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.020686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.020728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.020975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.021018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.021262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.021304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.021571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.021613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.021893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.021943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 
00:35:51.662 [2024-12-10 00:17:36.022156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.022198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.022412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.022455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.022731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.022773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.023069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.023121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.023427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.023469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.023797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.023855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.024169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.024212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.024491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.024534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.024762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.024805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.025105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.025147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 
00:35:51.662 [2024-12-10 00:17:36.025433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.025474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.025773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.025816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.026090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.026133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.026405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.026448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.026685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.026728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.027033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.027078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.027295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.027338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.027637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.027680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.662 qpair failed and we were unable to recover it. 00:35:51.662 [2024-12-10 00:17:36.027891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.662 [2024-12-10 00:17:36.027934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.028147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.028190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 
00:35:51.663 [2024-12-10 00:17:36.028338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.028381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.028653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.028695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.028928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.028971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.029210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.029262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.029588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.029632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.029951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.029995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.030308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.030351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.030663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.030706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.030992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.031036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.031221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.031263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 
00:35:51.663 [2024-12-10 00:17:36.031598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.031641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.031865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.031909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.032186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.032228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.032484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.032527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.032685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.032726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.032943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.032986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.033210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.033252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.033557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.033600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.033884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.033928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.034214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.034263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 
00:35:51.663 [2024-12-10 00:17:36.034567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.034610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.034818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.034894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.035146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.035190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.035490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.035532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.035843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.035887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.036032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.036075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.036322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.036363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.036642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.036685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.036930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.036975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.037293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.037334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 
00:35:51.663 [2024-12-10 00:17:36.037559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.037602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.037836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.037880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.038191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.038233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.038465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.038507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.038740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.038783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.039092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.039136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.039460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.039504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.039731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.039773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.040092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.040135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.040439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.040482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 
00:35:51.663 [2024-12-10 00:17:36.040782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.040855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.041156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.041199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.041428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.041471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.041684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.041726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.042026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.042072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.042349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.042392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.042713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.042756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.043018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.043062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.043210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.043250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.043471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.043513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 
00:35:51.663 [2024-12-10 00:17:36.043740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.043782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.044000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.044043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.044220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.044262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.044596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.044639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.044974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.045017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.045191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.045234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.045464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.045506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.045800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.045855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.046167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.046209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.046436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.046484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 
00:35:51.663 [2024-12-10 00:17:36.046797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.046859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.047135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.047179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.047408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.047450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.047597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.047639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.047965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.048009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.048194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.048236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.048493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.048535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.048758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.048800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.049056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.049098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.049412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.049457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 
00:35:51.663 [2024-12-10 00:17:36.049681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.049724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.050019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.050062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.050387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.050429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.663 qpair failed and we were unable to recover it. 00:35:51.663 [2024-12-10 00:17:36.050714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.663 [2024-12-10 00:17:36.050757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.051069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.051112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.051339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.051381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.051625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.051668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.051922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.051966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.052142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.052184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.054053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.054121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 
00:35:51.664 [2024-12-10 00:17:36.054408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.054454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.054761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.054803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.055113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.055156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.055387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.055430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.055667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.055711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.055972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.056016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.056268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.056311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.056553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.056596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.056860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.056907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 00:35:51.664 [2024-12-10 00:17:36.057128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.664 [2024-12-10 00:17:36.057171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.664 qpair failed and we were unable to recover it. 
00:35:51.664 [2024-12-10 00:17:36.057389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.664 [2024-12-10 00:17:36.057433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420
00:35:51.664 qpair failed and we were unable to recover it.
[... the same three-line record repeats continuously from 00:17:36.057 through 00:17:36.126: connect() fails with errno = 111 and nvme_tcp_qpair_connect_sock reports a sock connection error, first for tqpair=0x7fa974000b90, then for tqpair=0x243d000, then for tqpair=0x7fa974000b90 again, always with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:35:51.942 [2024-12-10 00:17:36.125993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.942 [2024-12-10 00:17:36.126037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420
00:35:51.942 qpair failed and we were unable to recover it.
00:35:51.942 [2024-12-10 00:17:36.126193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.126243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.942 qpair failed and we were unable to recover it. 00:35:51.942 [2024-12-10 00:17:36.126459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.126501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.942 qpair failed and we were unable to recover it. 00:35:51.942 [2024-12-10 00:17:36.126804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.126858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.942 qpair failed and we were unable to recover it. 00:35:51.942 [2024-12-10 00:17:36.127033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.127075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.942 qpair failed and we were unable to recover it. 00:35:51.942 [2024-12-10 00:17:36.127346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.127388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.942 qpair failed and we were unable to recover it. 00:35:51.942 [2024-12-10 00:17:36.127544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.127586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.942 qpair failed and we were unable to recover it. 00:35:51.942 [2024-12-10 00:17:36.127909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.127955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.942 qpair failed and we were unable to recover it. 00:35:51.942 [2024-12-10 00:17:36.128264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.128307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.942 qpair failed and we were unable to recover it. 00:35:51.942 [2024-12-10 00:17:36.128636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.128677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.942 qpair failed and we were unable to recover it. 00:35:51.942 [2024-12-10 00:17:36.128975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.129018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.942 qpair failed and we were unable to recover it. 
00:35:51.942 [2024-12-10 00:17:36.129328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.129371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.942 qpair failed and we were unable to recover it. 00:35:51.942 [2024-12-10 00:17:36.129689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.129732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.942 qpair failed and we were unable to recover it. 00:35:51.942 [2024-12-10 00:17:36.130025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.130069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.942 qpair failed and we were unable to recover it. 00:35:51.942 [2024-12-10 00:17:36.130296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.130339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.942 qpair failed and we were unable to recover it. 00:35:51.942 [2024-12-10 00:17:36.130578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.942 [2024-12-10 00:17:36.130619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.130960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.131003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.131303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.131345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.131585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.131626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.131947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.131991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.132269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.132311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 
00:35:51.943 [2024-12-10 00:17:36.132635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.132677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.132948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.132991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.133169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.133211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.133432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.133474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.133795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.133847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.134096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.134138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.134450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.134492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.134816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.134875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.135101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.135143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.135313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.135356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 
00:35:51.943 [2024-12-10 00:17:36.135545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.135588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.135809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.135881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.136102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.136145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.136441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.136483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.136756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.136797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.137108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.137150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.137359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.137401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.137646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.137688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.137954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.137998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.138247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.138289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 
00:35:51.943 [2024-12-10 00:17:36.138589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.138631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.138948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.138991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.139294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.139336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.139570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.139613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.139765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.139807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.140043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.140087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.140309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.140351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.140648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.140690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.140971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.141014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.141162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.141204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 
00:35:51.943 [2024-12-10 00:17:36.141496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.141538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.141842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.141885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.142127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.943 [2024-12-10 00:17:36.142170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.943 qpair failed and we were unable to recover it. 00:35:51.943 [2024-12-10 00:17:36.142506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.142549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.142839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.142884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.143178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.143221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.143372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.143414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.143633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.143674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.143953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.143997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.144251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.144294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 
00:35:51.944 [2024-12-10 00:17:36.144532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.144575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.144791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.144848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.145160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.145202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.145362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.145404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.145651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.145693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.146040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.146086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.146396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.146440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.146666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.146714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.146875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.146918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.147164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.147207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 
00:35:51.944 [2024-12-10 00:17:36.147488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.147532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.147841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.147885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.148225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.148267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.148519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.148561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.148847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.148890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.149189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.149231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.149528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.149570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.149796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.149850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.150137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.150180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.150454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.150496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 
00:35:51.944 [2024-12-10 00:17:36.150788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.150842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.151079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.151122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.151339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.151381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.151601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.151643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.151961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.152004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.152281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.152325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.152636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.152678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.152973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.153017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.153315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.153357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.153573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.153616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 
00:35:51.944 [2024-12-10 00:17:36.153914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.153958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.154178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.944 [2024-12-10 00:17:36.154220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.944 qpair failed and we were unable to recover it. 00:35:51.944 [2024-12-10 00:17:36.154448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.154490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.154776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.154818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.155059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.155102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.155251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.155293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.155596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.155638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.155869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.155914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.156247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.156289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.156601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.156642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 
00:35:51.945 [2024-12-10 00:17:36.156960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.157004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.157231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.157274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.157549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.157590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.157866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.157911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.158132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.158174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.158538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.158581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.158882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.158925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.159199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.159246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.159565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.159606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.159897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.159942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 
00:35:51.945 [2024-12-10 00:17:36.160239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.160281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.160556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.160598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.160833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.160878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.161101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.161143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.161468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.161510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.161734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.161776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.162021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.162065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.162308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.162350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.162552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.162594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.162893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.162937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 
00:35:51.945 [2024-12-10 00:17:36.163180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.163223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.163439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.163481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.163805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.163881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.164107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.945 [2024-12-10 00:17:36.164150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.945 qpair failed and we were unable to recover it. 00:35:51.945 [2024-12-10 00:17:36.164367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.164408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.164728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.164771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.165082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.165126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.165401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.165443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.165745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.165788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.166031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.166074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 
00:35:51.946 [2024-12-10 00:17:36.166345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.166387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.166691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.166733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.166929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.166972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.167201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.167244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.167574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.167618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.167949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.167993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.168221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.168263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.168574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.168616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.168848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.168890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.169179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.169221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 
00:35:51.946 [2024-12-10 00:17:36.169430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.169472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.169743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.169785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.170010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.170053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.170330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.170371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.170663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.170704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.170956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.171001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.171274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.171316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.171659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.171708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.172010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.172054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.172290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.172332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 
00:35:51.946 [2024-12-10 00:17:36.172711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.172753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.172995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.173039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.173320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.173362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.173589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.173631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.173949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.173993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.174222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.174265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.174509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.174551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.174836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.174880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.175089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.175131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.175407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.175450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 
00:35:51.946 [2024-12-10 00:17:36.175754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.175798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.176112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.176155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.176325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.946 [2024-12-10 00:17:36.176368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.946 qpair failed and we were unable to recover it. 00:35:51.946 [2024-12-10 00:17:36.176682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.176725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.177004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.177048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.177275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.177318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.177603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.177645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.177962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.178006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.178169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.178212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.178435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.178478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 
00:35:51.947 [2024-12-10 00:17:36.178697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.178739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.178977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.179022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.179183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.179225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.179446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.179489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.179781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.179849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.180032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.180075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.180349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.180392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.180649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.180691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.180915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.180959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.181204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.181247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 
00:35:51.947 [2024-12-10 00:17:36.181423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.181466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.181738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.181780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.181969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.182012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.182315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.182357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.182570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.182612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.182895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.182938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.183240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.183283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.183531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.183586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.183877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.183921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.184144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.184186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 
00:35:51.947 [2024-12-10 00:17:36.184508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.184550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.184883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.184927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.185226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.185268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.185485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.185527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.185808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.185862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.186135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.186178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.186476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.186518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.186812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.186865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.187114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.187156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.187450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.187491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 
00:35:51.947 [2024-12-10 00:17:36.187727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.187769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.188080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.947 [2024-12-10 00:17:36.188123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.947 qpair failed and we were unable to recover it. 00:35:51.947 [2024-12-10 00:17:36.188437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.188478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.188776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.188818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.189117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.189160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.189523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.189565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.189846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.189890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.190185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.190227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.190378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.190420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.190709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.190751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 
00:35:51.948 [2024-12-10 00:17:36.190990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.191033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.191269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.191311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.191585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.191628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.191779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.191820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.192071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.192114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.192406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.192448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.192676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.192719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.192955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.193000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.193302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.193344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.193666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.193708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 
00:35:51.948 [2024-12-10 00:17:36.194044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.194087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.194337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.194380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.194677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.194718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.194953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.194997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.195150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.195192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.195510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.195552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.195862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.195906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.196220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.196268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.196484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.196526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.196734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.196776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 
00:35:51.948 [2024-12-10 00:17:36.197090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.197133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.197427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.197470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.197779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.197821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.198132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.198174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.198409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.198451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.198760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.198802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.199059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.199101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.199394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.199437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.199749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.199792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 00:35:51.948 [2024-12-10 00:17:36.200116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.200159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.948 qpair failed and we were unable to recover it. 
00:35:51.948 [2024-12-10 00:17:36.200428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.948 [2024-12-10 00:17:36.200470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.200726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.200768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.201082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.201127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.201346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.201388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.201596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.201639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.201911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.201956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.202251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.202294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.202590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.202632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.202961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.203005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.203228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.203271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 
00:35:51.949 [2024-12-10 00:17:36.203594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.203636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.203867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.203910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.204074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.204116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.204346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.204388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.204621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.204664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.204964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.205008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.205348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.205390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.205613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.205655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.205871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.205915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.206191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.206234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 
00:35:51.949 [2024-12-10 00:17:36.206530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.206572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.206867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.206910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.207183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.207226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.207519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.207561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.207865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.207909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.208128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.208170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.208462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.208504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.208787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.208847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.209071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.209113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.209417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.209459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 
00:35:51.949 [2024-12-10 00:17:36.209684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.209727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.210044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.210088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.210384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.210426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.210634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.210676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.210966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.211009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.211299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.211342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.211613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.211655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.211952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.211994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.212290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.212332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.949 qpair failed and we were unable to recover it. 00:35:51.949 [2024-12-10 00:17:36.212478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.949 [2024-12-10 00:17:36.212521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 
00:35:51.950 [2024-12-10 00:17:36.212821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.212876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.213180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.213224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.213429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.213471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.213794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.213848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.214129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.214172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.214447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.214489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.214804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.214858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.215136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.215178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.215469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.215512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.215808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.215865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 
00:35:51.950 [2024-12-10 00:17:36.216085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.216127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.216364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.216405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.216611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.216653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.216787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.216841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.217098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.217140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.217414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.217456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.217753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.217795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.218107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.218149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.218362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.218402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 00:35:51.950 [2024-12-10 00:17:36.218688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.950 [2024-12-10 00:17:36.218730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.950 qpair failed and we were unable to recover it. 
00:35:51.950 [2024-12-10 00:17:36.219021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:51.950 [2024-12-10 00:17:36.219064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420
00:35:51.950 qpair failed and we were unable to recover it.
[the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplet repeats for each retry against tqpair=0x7fa974000b90 (10.0.0.2, port 4420) from 00:17:36.219 through 00:17:36.278; duplicate occurrences omitted]
00:35:51.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 595551 Killed "${NVMF_APP[@]}" "$@"
00:35:51.952 00:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:35:51.952 00:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:35:51.952 00:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:51.952 00:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:51.952 00:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
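Editor's note (not part of the captured log): errno = 111 in the connect() failures above is ECONNREFUSED on Linux, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 while the old target application is down and before the new one is started. A quick, hypothetical way to confirm the mapping on a build host (assumes python3 is on the PATH):
# hypothetical check, not part of the test run: print the symbolic name for errno 111
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# expected output on Linux: ECONNREFUSED - Connection refused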
00:35:51.953 00:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=596380
00:35:51.953 00:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 596380
00:35:51.953 00:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:35:51.953 00:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 596380 ']'
00:35:51.953 00:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:51.953 00:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:51.953 00:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:51.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:51.953 00:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:51.953 00:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
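Editor's note (not part of the captured log): the trace above shows nvmfappstart launching a fresh nvmf_tgt (pid 596380) inside the cvl_0_0_ns_spdk namespace and then invoking waitforlisten with rpc_addr=/var/tmp/spdk.sock and max_retries=100. As a rough illustration only, a waitforlisten-style poll could look like the sketch below; this is a hypothetical example, not the actual autotest_common.sh helper, and it assumes only the socket path and retry budget visible in the trace:
# hypothetical sketch of a waitforlisten-style poll loop
rpc_sock=/var/tmp/spdk.sock   # RPC socket path seen in the trace
max_retries=100               # retry budget seen in the trace
i=0
# poll until the target creates its UNIX-domain RPC socket, or give up
until [ -S "$rpc_sock" ]; do
    i=$((i + 1))
    if [ "$i" -gt "$max_retries" ]; then
        echo "timed out waiting for $rpc_sock" >&2
        exit 1
    fi
    sleep 0.5
done
echo "process is listening on $rpc_sock"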
00:35:51.956 [2024-12-10 00:17:36.279077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.956 [2024-12-10 00:17:36.279122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.956 qpair failed and we were unable to recover it. 00:35:51.956 [2024-12-10 00:17:36.279349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.956 [2024-12-10 00:17:36.279391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.956 qpair failed and we were unable to recover it. 00:35:51.956 [2024-12-10 00:17:36.279697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.956 [2024-12-10 00:17:36.279742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.956 qpair failed and we were unable to recover it. 00:35:51.956 [2024-12-10 00:17:36.280061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.956 [2024-12-10 00:17:36.280108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.956 qpair failed and we were unable to recover it. 00:35:51.956 [2024-12-10 00:17:36.280407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.956 [2024-12-10 00:17:36.280449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.956 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.280598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.280640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.280876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.280921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.281072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.281114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.281436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.281479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.281752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.281794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 
00:35:51.957 [2024-12-10 00:17:36.282004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.282046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.282262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.282304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.282520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.282569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.282873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.282916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.283133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.283175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.283508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.283551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.283802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.283858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.284156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.284198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.284369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.284411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.284654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.284696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 
00:35:51.957 [2024-12-10 00:17:36.284930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.284972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.285247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.285290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.285531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.285573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.285778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.285820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.286060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.286103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.286335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.286378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.286676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.286718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.286870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.286913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.287137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.287179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.287328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.287369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 
00:35:51.957 [2024-12-10 00:17:36.287640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.287682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.287887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.287931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.288179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.288221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.288425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.288467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.288626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.288668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.288841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.288884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.289026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.289068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.289342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.289384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.289589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.957 [2024-12-10 00:17:36.289631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.957 qpair failed and we were unable to recover it. 00:35:51.957 [2024-12-10 00:17:36.289914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.289957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 
00:35:51.958 [2024-12-10 00:17:36.290129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.290170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.290379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.290428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.290588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.290630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.290837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.290880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.291107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.291149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.291418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.291459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.291604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.291646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.291799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.291854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.292088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.292130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.292333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.292375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 
00:35:51.958 [2024-12-10 00:17:36.292596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.292638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.292938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.292982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.293207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.293254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.293473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.293514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.293678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.293719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.293921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.293964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.294167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.294209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.294499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.294541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.294821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.294875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.295169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.295212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 
00:35:51.958 [2024-12-10 00:17:36.295360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.295401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.295677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.295720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.295996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.296039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.296343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.296386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.296616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.296658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.296871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.296915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.297089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.297132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.297219] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:35:51.958 [2024-12-10 00:17:36.297277] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:51.958 [2024-12-10 00:17:36.297345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.297386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.297599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.297638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 
00:35:51.958 [2024-12-10 00:17:36.297859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.297900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.298169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.298208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.298409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.298449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.958 [2024-12-10 00:17:36.298744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.958 [2024-12-10 00:17:36.298787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.958 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.299113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.299155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.299310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.299351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.299626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.299668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.299973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.300017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.300249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.300291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.300475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.300518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 
00:35:51.959 [2024-12-10 00:17:36.300662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.300703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.300909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.300952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.301087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.301130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.301409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.301451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.301722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.301764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.302044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.302087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.302287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.302330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.302476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.302518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.302656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.302697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.302901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.302945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 
00:35:51.959 [2024-12-10 00:17:36.303186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.303227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.303393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.303434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.303661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.303709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.304005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.304048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.304193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.304234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.304386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.304427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.304633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.304676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.304951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.304993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.305144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.305186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.305326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.305368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 
00:35:51.959 [2024-12-10 00:17:36.305672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.305714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.305874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.305918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.306089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.306134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.959 [2024-12-10 00:17:36.306432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.959 [2024-12-10 00:17:36.306474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.959 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.306760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.306802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.306975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.307026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.307291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.307340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.307563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.307604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.307910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.307957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.308115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.308157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 
00:35:51.960 [2024-12-10 00:17:36.308424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.308466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.308676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.308718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.308921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.308965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.309261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.309302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.309444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.309487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.309757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.309801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.310021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.310063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.310404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.310446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.310647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.310689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.310914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.310958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 
00:35:51.960 [2024-12-10 00:17:36.311110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.311151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.311378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.311420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.311654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.311696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.311977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.312021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.312234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.312289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.312509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.312552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.312796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.312851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.313089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.313137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.313355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.313404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.313647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.313701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 
00:35:51.960 [2024-12-10 00:17:36.313870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.313913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.314205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.314251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.314524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.314586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.314838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.314885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.960 [2024-12-10 00:17:36.315156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.960 [2024-12-10 00:17:36.315199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.960 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.315346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.315389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.315618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.315660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.315872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.315915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.316115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.316157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.316358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.316400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 
00:35:51.961 [2024-12-10 00:17:36.316632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.316673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.316874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.316917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.317185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.317226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.317370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.317411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.317576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.317618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.317935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.317978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.318208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.318250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.318493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.318535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.318756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.318806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.319094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.319135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 
00:35:51.961 [2024-12-10 00:17:36.319426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.319468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.319757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.319800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.320079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.320130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.320362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.320404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.320560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.320602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.320757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.320805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.321048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.321089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.321269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.321311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.321598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.321639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.321871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.321914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 
00:35:51.961 [2024-12-10 00:17:36.322203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.322245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.322513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.322555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.322764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.322804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.323038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.323080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.323317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.323358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.323505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.323547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.323820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.323874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.324142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.324183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.324382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.324423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.324663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.324704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 
00:35:51.961 [2024-12-10 00:17:36.324914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.324957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.325228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.325270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.325431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.961 [2024-12-10 00:17:36.325479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.961 qpair failed and we were unable to recover it. 00:35:51.961 [2024-12-10 00:17:36.325766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.325809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.326050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.326092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.326326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.326368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.326595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.326636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.326855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.326898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.327116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.327157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.327292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.327334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 
00:35:51.962 [2024-12-10 00:17:36.327545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.327586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.327816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.327886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.328155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.328196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.328422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.328464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.328668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.328709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.328942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.328985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.329231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.329273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.329485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.329526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.329792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.329844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.330114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.330157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 
00:35:51.962 [2024-12-10 00:17:36.330374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.330416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.330696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.330736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.330985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.331028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.331246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.331287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.331500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.331542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.331770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.331811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.332063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.332105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.332385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.332426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.332646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.332687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.332987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.333031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 
00:35:51.962 [2024-12-10 00:17:36.333229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.333270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.333472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.333514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.333806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.333858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.334141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.334182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.334345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.334386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.334598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.334639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.334843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.334887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.335084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.335125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.335391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.335433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.335651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.335692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 
00:35:51.962 [2024-12-10 00:17:36.335871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.335915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.336210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.962 [2024-12-10 00:17:36.336251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.962 qpair failed and we were unable to recover it. 00:35:51.962 [2024-12-10 00:17:36.336559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.336606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.336837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.336881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.337100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.337140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.337353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.337394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.337619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.337660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.337959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.338002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.338156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.338198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.338345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.338386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 
00:35:51.963 [2024-12-10 00:17:36.338594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.338636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.338853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.338896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.339116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.339157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.339422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.339463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.339699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.339740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.339942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.339985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.340204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.340245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.340478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.340519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.340805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.340856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.341062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.341104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 
00:35:51.963 [2024-12-10 00:17:36.341344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.341385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.341543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.341583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.341804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.341859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.342076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.342117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.342322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.342363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.342494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.342535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.342835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.342878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.343094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.343135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.343270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.343311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.343580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.343622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 
00:35:51.963 [2024-12-10 00:17:36.343866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.343908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.344046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.344087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.344299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.344340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.344629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.344670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.344929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.344973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.345285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.345326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.345527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.345568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.345797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.345847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.345988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.346029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.346321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.346362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 
00:35:51.963 [2024-12-10 00:17:36.346651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.963 [2024-12-10 00:17:36.346692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.963 qpair failed and we were unable to recover it. 00:35:51.963 [2024-12-10 00:17:36.346979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.347021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.347234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.347281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.347570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.347611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.347843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.347887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.348038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.348079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.348359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.348400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.348531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.348570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.348709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.348750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.348982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.349025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 
00:35:51.964 [2024-12-10 00:17:36.349309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.349349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.349636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.349677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.349910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.349953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.350158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.350199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.350482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.350523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.350835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.350877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.351106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.351147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.351379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.351419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.351705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.351746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.351908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.351950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 
00:35:51.964 [2024-12-10 00:17:36.352101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.352142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.352341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.352382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.352603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.352644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.352851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.352894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.353207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.353249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.353517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.353558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.353706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.353747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.353892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.353936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.354125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.354166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.354438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.354522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 
00:35:51.964 [2024-12-10 00:17:36.354710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.964 [2024-12-10 00:17:36.354756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.964 qpair failed and we were unable to recover it. 00:35:51.964 [2024-12-10 00:17:36.354975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.355019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.355155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.355196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.355482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.355524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.355790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.355844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.356058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.356099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.356383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.356424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.356685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.356726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.357007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.357051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.357258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.357298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 
00:35:51.965 [2024-12-10 00:17:36.357560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.357601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.357801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.357851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.358146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.358187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.358417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.358458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.358664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.358705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.358903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.358945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.359178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.359219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.359420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.359461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.359749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.359789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.359959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.360002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 
00:35:51.965 [2024-12-10 00:17:36.360265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.360306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.360569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.360609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.360760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.360801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.361095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.361137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.361424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.361464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.361699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.361740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.361982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.362030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.362261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.362302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.362584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.362625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.362868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.362911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 
00:35:51.965 [2024-12-10 00:17:36.363120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.363161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.363300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.363341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.363616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.363657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.363874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.965 [2024-12-10 00:17:36.363921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.965 qpair failed and we were unable to recover it. 00:35:51.965 [2024-12-10 00:17:36.364144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.364184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.364412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.364453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.364697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.364737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.365025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.365067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.365346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.365387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.365588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.365629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 
00:35:51.966 [2024-12-10 00:17:36.365935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.365977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.366193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.366234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.366440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.366480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.366758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.366799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.367017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.367058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.367264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.367304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.367557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.367597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.367745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.367786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.368068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.368110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.368321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.368362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 
00:35:51.966 [2024-12-10 00:17:36.368579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.368620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.368849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.368892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.369027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.369069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.369289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.369336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.369597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.369639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.369878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.369920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.370053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.370093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.370293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.370334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.370528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.370570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.370804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.371071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 
00:35:51.966 [2024-12-10 00:17:36.371294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.371335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.371548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.371589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.371762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.371802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.372064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.372106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.372301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.372342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.372496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.372536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.372741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.372783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.373063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.373106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.373310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.966 [2024-12-10 00:17:36.373351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.966 qpair failed and we were unable to recover it. 00:35:51.966 [2024-12-10 00:17:36.373555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.373596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 
00:35:51.967 [2024-12-10 00:17:36.373837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.373881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.374091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.374132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.374347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.374388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.374591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.374630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.374936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.374978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.375190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.375231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.375361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.375402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.375626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.375667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.375879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.375921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.376113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.376153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 
00:35:51.967 [2024-12-10 00:17:36.376414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.376461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.376602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.376644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.376924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.376966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.377117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.377159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.377359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.377401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.377712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.377752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.377997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.378040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.378238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.378278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.378438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.378478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.378677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.378718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 
00:35:51.967 [2024-12-10 00:17:36.378981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.379024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.379177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.379217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.379360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.379401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.379687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.379728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.379944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.379987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.380244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.380285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.380490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.380530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.380842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.380884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.381119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.381160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.381419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.381460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 
00:35:51.967 [2024-12-10 00:17:36.381706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.381746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.381988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.382030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.382320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.382363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.967 qpair failed and we were unable to recover it. 00:35:51.967 [2024-12-10 00:17:36.382493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.967 [2024-12-10 00:17:36.382533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.382847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.382890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.383028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.383069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.383346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.383387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.383668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.383708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.383918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.383961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.384220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.384260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 
00:35:51.968 [2024-12-10 00:17:36.384470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.384511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.384795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.384848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.385055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.385096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.385311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.385352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.385557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.385598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.385804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.385852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.386050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.386092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.386246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.386288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.386569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.386609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.386895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.386937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 
00:35:51.968 [2024-12-10 00:17:36.387148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.387190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.387423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.387464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.387672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.387713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.387872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.387914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.388129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.388170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.388364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.388405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.388683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.388724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.389004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.389046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.389251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.389292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.389430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.389470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 
00:35:51.968 [2024-12-10 00:17:36.389663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.389704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.389898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.389940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.390203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.390244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.390447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.390488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.390692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.390733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.390963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.391005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.391210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.391251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.391461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.391502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.391724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.391765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 00:35:51.968 [2024-12-10 00:17:36.392053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.968 [2024-12-10 00:17:36.392095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.968 qpair failed and we were unable to recover it. 
00:35:51.968 [2024-12-10 00:17:36.392306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.392346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.392630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.392670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.392879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.392921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.393211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.393251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.393379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.393419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.393651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.393692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.394000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.394041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.394324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.394365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.394568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.394618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.394851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.394893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 
00:35:51.969 [2024-12-10 00:17:36.395123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.395163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.395361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.395401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.395554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.395594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.395836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.395877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.396027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.396067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.396396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.396437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.396643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.396684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.396883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.396924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.397242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.397283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.397494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.397535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 
00:35:51.969 [2024-12-10 00:17:36.397833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.397874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.398014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.398055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.398285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.398326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.398529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.398570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.398844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.398886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:51.969 [2024-12-10 00:17:36.399110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.969 [2024-12-10 00:17:36.399151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:51.969 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.399426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.399468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.399686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.399727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.399925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.399969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.400135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.400175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 
00:35:52.258 [2024-12-10 00:17:36.400337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.400378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.400517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.400558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.400858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.400901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.400969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:52.258 [2024-12-10 00:17:36.401104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.401145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.401291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.401331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.401657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.401698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.401998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.402040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.402253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.402293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.402497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.402539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.402745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.402786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 
00:35:52.258 [2024-12-10 00:17:36.402947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.402988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.403199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.403240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.403498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.258 [2024-12-10 00:17:36.403539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.258 qpair failed and we were unable to recover it. 00:35:52.258 [2024-12-10 00:17:36.403740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.403780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.404047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.404127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.404297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.404343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.404485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.404526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.404736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.404777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.405008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.405051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.405321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.405363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 
00:35:52.259 [2024-12-10 00:17:36.405570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.405610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.405761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.405801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.406045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.406086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.406291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.406331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.406606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.406647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.406860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.406902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.407110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.407151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.407405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.407445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.407662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.407702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.407918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.407960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 
00:35:52.259 [2024-12-10 00:17:36.408175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.408216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.408364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.408405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.408606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.408686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.408868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.408916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.409074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.409117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.409392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.409433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.409695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.409736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.409932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.409976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.410263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.410304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.410503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.410544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 
00:35:52.259 [2024-12-10 00:17:36.410764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.410804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.411025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.411067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.411279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.411319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.411580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.411621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.411835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.411878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.412072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.412123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.412332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.412373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.412593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.412634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.412893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.412936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.413079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.413119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 
00:35:52.259 [2024-12-10 00:17:36.413332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.413373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.413582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.413623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.413771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.413811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.414028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.414070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.414201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.414242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.414371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.414413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.414603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.414643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.414843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.414885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.415088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.415128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.415337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.415379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 
00:35:52.259 [2024-12-10 00:17:36.415525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.415566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.259 qpair failed and we were unable to recover it. 00:35:52.259 [2024-12-10 00:17:36.415760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.259 [2024-12-10 00:17:36.415800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.416005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.416046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.416243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.416285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.416567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.416607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.416814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.416868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.416998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.417039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.417233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.417274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.417486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.417526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.417726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.417766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 
00:35:52.260 [2024-12-10 00:17:36.417905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.417948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.418099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.418139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.418422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.418468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.418727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.418769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.418995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.419037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.419320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.419360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.419642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.419683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.419839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.419881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.420008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.420049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.420272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.420312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 
00:35:52.260 [2024-12-10 00:17:36.420530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.420570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.420726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.420766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.421014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.421056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.421338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.421378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.421637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.421678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.421886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.421929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.422197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.422240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.422455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.422496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.422778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.422820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.422957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.422999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 
00:35:52.260 [2024-12-10 00:17:36.423196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.423237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.423464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.423505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.423774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.423815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.424083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.424125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.424271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.424311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.424440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.424480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.424689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.424730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.424936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.424978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.425235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.425276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.425470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.425511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 
00:35:52.260 [2024-12-10 00:17:36.425796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.425847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.426052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.426092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.426321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.426361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.426640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.426681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.426819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.426873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.427078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.427120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.427318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.427360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.427554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.427595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.427728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.260 [2024-12-10 00:17:36.427768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.260 qpair failed and we were unable to recover it. 00:35:52.260 [2024-12-10 00:17:36.427972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.428014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 
00:35:52.261 [2024-12-10 00:17:36.428228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.428268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.428471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.428511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.428661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.428709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.428970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.429014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.429160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.429201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.429327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.429368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.429628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.429668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.429862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.429904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.430131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.430172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.430430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.430471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 
00:35:52.261 [2024-12-10 00:17:36.430679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.430719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.430931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.430974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.431114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.431155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.431353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.431393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.431539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.431580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.431843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.431885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.432035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.432076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.432292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.432334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.432533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.432574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.432766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.432806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 
00:35:52.261 [2024-12-10 00:17:36.432988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.433030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.433249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.433289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.433481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.433521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.433663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.433703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.433982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.434025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.434313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.434354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.434498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.434538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.434751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.434792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.435005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.435046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.435256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.435297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 
00:35:52.261 [2024-12-10 00:17:36.435434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.435474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.435702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.435742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.436029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.436070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.436325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.436365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.436572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.436613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.436760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.436800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.437061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.437102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.437396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.437437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.437708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.437749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.437900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.437942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 
00:35:52.261 [2024-12-10 00:17:36.438230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.438273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.438403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.438445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.438708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.438756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.439053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.439098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.439360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.439402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.439545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.439585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.439629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:52.261 [2024-12-10 00:17:36.439658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:52.261 [2024-12-10 00:17:36.439668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:52.261 [2024-12-10 00:17:36.439677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:52.261 [2024-12-10 00:17:36.439684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:52.261 [2024-12-10 00:17:36.439790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.439841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.261 [2024-12-10 00:17:36.440113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.440152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 
00:35:52.261 [2024-12-10 00:17:36.440387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.261 [2024-12-10 00:17:36.440428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.261 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.440618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.440660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.440853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.440896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.441088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.441128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.441271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.441312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.441357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:52.262 [2024-12-10 00:17:36.441448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:52.262 [2024-12-10 00:17:36.441558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:52.262 [2024-12-10 00:17:36.441595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.441636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.441560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:52.262 [2024-12-10 00:17:36.441864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.441905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.442044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.442083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.442288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.442330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 
00:35:52.262 [2024-12-10 00:17:36.442594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.442636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.442775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.442815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.443028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.443070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.443270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.443312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.443509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.443549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.443681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.443722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.443983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.444026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.444168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.444209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.444441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.444481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.444642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.444684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 
00:35:52.262 [2024-12-10 00:17:36.444900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.444942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.445156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.445198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.445407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.445448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.445685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.445726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.445862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.445904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.446120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.446160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.446358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.446399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.446622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.446662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.446859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.446900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.447160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.447200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 
00:35:52.262 [2024-12-10 00:17:36.447392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.447433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.447563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.447604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.447933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.448018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.448201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.448245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.448530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.448573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.448779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.448820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.449003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.449044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.449178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.449220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.449498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.449540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 00:35:52.262 [2024-12-10 00:17:36.449732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.262 [2024-12-10 00:17:36.449773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.262 qpair failed and we were unable to recover it. 
00:35:52.262 [2024-12-10 00:17:36.449942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.449984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.450145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.450186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.450372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.450412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.450541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.450582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.450712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.450753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.450961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.451015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.451223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.451265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.451492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.451532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.451653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.451694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.451978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.452021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 
00:35:52.263 [2024-12-10 00:17:36.452229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.452270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.452531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.452572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.452839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.452881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.453150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.453190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.453321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.453362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.453621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.453662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.453908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.453950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.454080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.454121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.454312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.454356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.454599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.454640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 
00:35:52.263 [2024-12-10 00:17:36.454845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.454888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.455081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.455122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.455390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.455432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.455652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.455694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.455892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.455935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.456117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.456157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.456283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.456324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.456451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.456492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.456636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.456677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.456936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.456979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 
00:35:52.263 [2024-12-10 00:17:36.457107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.457149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.457343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.457384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.457682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.457729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.457976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.458018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.458286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.458327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.458536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.458576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.458728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.458769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.458971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.459012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.459222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.459262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.459546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.459587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 
00:35:52.263 [2024-12-10 00:17:36.459782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.459832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.460115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.460156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.460379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.460419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.460701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.460742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.460964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.461006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.461234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.461280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.461483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.461524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.461732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.461773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.462039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.462082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.462325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.462367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 
00:35:52.263 [2024-12-10 00:17:36.462603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.462644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.263 qpair failed and we were unable to recover it. 00:35:52.263 [2024-12-10 00:17:36.462890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.263 [2024-12-10 00:17:36.462933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.463193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.463236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.463384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.463426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.463607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.463649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.463843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.463888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.464172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.464217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.464421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.464466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.464697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.464738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.464974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.465018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 
00:35:52.264 [2024-12-10 00:17:36.465306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.465350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.465503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.465544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.465803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.465855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.466116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.466159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.466356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.466397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.466607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.466649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.466912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.466956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.467173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.467214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.467417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.467458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.467609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.467651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 
00:35:52.264 [2024-12-10 00:17:36.467915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.467960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.468101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.468142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.468456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.468519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.468742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.468784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.469054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.469095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.469293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.469334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.469549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.469591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.469821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.469875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.470037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.470079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.470281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.470322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 
00:35:52.264 [2024-12-10 00:17:36.470523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.470564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.470848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.470893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.471106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.471148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.471419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.471462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.471724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.471767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.471995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.472048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.472279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.472320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.472534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.472575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.472782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.472835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.472994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.473035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 
00:35:52.264 [2024-12-10 00:17:36.473228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.473270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.473496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.473539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.473743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.473786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.474034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.474080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.474389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.474437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.474643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.474690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.474839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.264 [2024-12-10 00:17:36.474884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.264 qpair failed and we were unable to recover it. 00:35:52.264 [2024-12-10 00:17:36.475032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.475076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.475272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.475315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.475560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.475608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 
00:35:52.265 [2024-12-10 00:17:36.475769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.475812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.476035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.476079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.476342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.476386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.476592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.476634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.476838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.476881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.477085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.477127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.477295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.477337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.477549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.477591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.477875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.477917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.478137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.478178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 
00:35:52.265 [2024-12-10 00:17:36.478463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.478504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.478723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.478764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.479037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.479101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.479356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.479396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.479596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.479638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.479796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.479850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.479999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.480041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.480236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.480277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.480473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.480513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.480715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.480756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 
00:35:52.265 [2024-12-10 00:17:36.480892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.480933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.481065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.481105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.481240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.481281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.481481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.481522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.481729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.481769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.482000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.482057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.482321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.482361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.482487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.482527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.482653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.482694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.482951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.482993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 
00:35:52.265 [2024-12-10 00:17:36.483254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.483295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.483509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.483550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.483774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.483815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.484024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.484064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.484346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.484388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.484605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.484647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.484916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.484960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.485230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.485271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.485483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.485527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.485741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.485786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 
00:35:52.265 [2024-12-10 00:17:36.486002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.486045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.486307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.486354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.265 qpair failed and we were unable to recover it. 00:35:52.265 [2024-12-10 00:17:36.486518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.265 [2024-12-10 00:17:36.486561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.486820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.486871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.487071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.487116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.487278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.487319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.487511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.487552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.487701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.487743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.487900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.487942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.488154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.488196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 
00:35:52.266 [2024-12-10 00:17:36.488478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.488521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.488734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.488778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.489035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.489096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.489328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.489369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.489587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.489628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.489847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.489890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.490102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.490143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.490341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.490381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.490571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.490613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 00:35:52.266 [2024-12-10 00:17:36.490862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.266 [2024-12-10 00:17:36.490905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.266 qpair failed and we were unable to recover it. 
00:35:52.266 [2024-12-10 00:17:36.491116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.266 [2024-12-10 00:17:36.491157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420
00:35:52.266 qpair failed and we were unable to recover it.
00:35:52.266 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for each retried connect attempt, with timestamps running from 00:17:36.491 through 00:17:36.545 ...]
00:35:52.269 [2024-12-10 00:17:36.545933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.545975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.546171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.546212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.546420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.546462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.546695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.546735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.546897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.546939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.547133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.547174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.547309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.547350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.547583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.547624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.547885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.547927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.548218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.548259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 
00:35:52.269 [2024-12-10 00:17:36.548465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.548506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.548699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.548739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.548869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.548911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.549122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.549163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.549377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.549418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.549573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.549613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.549761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.549802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.550020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.550061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.550322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.550364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.550559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.550600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 
00:35:52.269 [2024-12-10 00:17:36.550838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.550880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.551132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.551172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.551382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.551428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.551713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.551755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.551993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.552036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.552318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.552359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.552587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.552627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.552886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.552929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.553231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.553272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.553431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.553472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 
00:35:52.269 [2024-12-10 00:17:36.553674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.553715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.553974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.554016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.554276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.554317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.554519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.554560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.554790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.554850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.555094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.555135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.555282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.555323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.555468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.555509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.555721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.555762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.556057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.556099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 
00:35:52.269 [2024-12-10 00:17:36.556304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.556345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.556603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.556644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.556852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.556895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.557103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.557143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.557364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.557405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.557667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.557708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.557852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.557893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.558198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.558238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.558368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.558409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.558626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.558668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 
00:35:52.269 [2024-12-10 00:17:36.558888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.558931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.559190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.559230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.559376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.559416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.559726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.559768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.560071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.269 [2024-12-10 00:17:36.560112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.269 qpair failed and we were unable to recover it. 00:35:52.269 [2024-12-10 00:17:36.560309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.560350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.560502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.560543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.560755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.560801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.561027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.561069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.561354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.561394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 
00:35:52.270 [2024-12-10 00:17:36.561589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.561630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.561916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.561959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.562257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.562304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.562511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.562552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.562756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.562797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.562948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.562989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.563214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.563255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.563447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.563489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.563683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.563724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.563936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.563978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 
00:35:52.270 [2024-12-10 00:17:36.564180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.564222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.564429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.564470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.564620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.564661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.564866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.564907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.565100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.565141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.565420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.565461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.565618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.565660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.565918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.565959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.566171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.566211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.566421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.566463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 
00:35:52.270 [2024-12-10 00:17:36.566753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.566793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.567109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.567150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.567357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.567398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.567663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.567704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.567859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.567900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.568051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.568092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.568313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.568354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.568636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.568677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.568913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.568956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.569154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.569196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 
00:35:52.270 [2024-12-10 00:17:36.569481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.569522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.569672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.569713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.569941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.569984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.570243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.570283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.570489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.570530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.570661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.570702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.570900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.570941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.571188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.571229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.571486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.571527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.571731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.571771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 
00:35:52.270 [2024-12-10 00:17:36.571991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.572033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.572240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.572281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.572437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.572489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.572748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.572789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.573109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.573150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.573411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.573452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.573661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.573703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.573902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.573944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.574163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.574203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.574461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.574502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 
00:35:52.270 [2024-12-10 00:17:36.574789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.574846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.575048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.575088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.575349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.575390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.575537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.575578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.575891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.575933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.576196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.270 [2024-12-10 00:17:36.576237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.270 qpair failed and we were unable to recover it. 00:35:52.270 [2024-12-10 00:17:36.576510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.576551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.576783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.576834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.577065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.577106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.577308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.577349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 
00:35:52.271 [2024-12-10 00:17:36.577546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.577586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.577789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.577843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.578054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.578094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.578306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.578347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.578658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.578700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.578850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.578892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.579035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.579075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.579199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.579240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.579566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.579606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.579871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.579943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 
00:35:52.271 [2024-12-10 00:17:36.580266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.580348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.580632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.580676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.580853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.580897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.581110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.581152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.581302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.581343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.581628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.581669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.581919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.581962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.582174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.582215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.582437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.582478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.582763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.582804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 
00:35:52.271 [2024-12-10 00:17:36.583022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.583063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.583322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.583363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.583561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.583612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.583900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.583943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.584173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.584214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.584423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.584465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.584679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.584720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.584932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.584975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.585213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.585255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.585482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.585522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 
00:35:52.271 [2024-12-10 00:17:36.585736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.585777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.585986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.586027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.586166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.586207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.586367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.586408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.586568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.586608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.586758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.586798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.587021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.587063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.587323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.587364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.587570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.587611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.587770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.587811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 
00:35:52.271 [2024-12-10 00:17:36.588055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.588095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.588374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.588414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.588572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.588613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.588752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.588792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.589073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.589115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.589319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.589360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.589640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.589680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.589895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.589938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.590141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.590182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.590467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.590513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 
00:35:52.271 [2024-12-10 00:17:36.590802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.590856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.590987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.591028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.591314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.591357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.591571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.591613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.591821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.591875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.592096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.592137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.592341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.592382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.592652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.592693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.593005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.593047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.593203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.593243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 
00:35:52.271 [2024-12-10 00:17:36.593448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.593489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.593749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.593791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.594065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.594112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.594323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.594363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.594520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.594561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.594845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.594886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.595121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.595162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.595305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.595345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.271 [2024-12-10 00:17:36.595568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.271 [2024-12-10 00:17:36.595610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.271 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.595755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.595796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 
00:35:52.272 [2024-12-10 00:17:36.595977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.596018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.596210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.596251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.596400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.596442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.596701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.596742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.596943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.596986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.597189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.597230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.597499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.597540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.597685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.597726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.597976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.598018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.598209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.598249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 
00:35:52.272 [2024-12-10 00:17:36.598439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.598480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.598620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.598661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.598866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.598908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.599050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.599090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.599238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.599280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.599489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.599530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.599673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.599714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.600002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.600044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.600256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.600296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.600513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.600558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 
00:35:52.272 [2024-12-10 00:17:36.600738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.600778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.600985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.601027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.601286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.601326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.601473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.601513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.601714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.601754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.601962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.602003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.602215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.602256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.602450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.602490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.602687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.602727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.602885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.602927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 
00:35:52.272 [2024-12-10 00:17:36.603133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.603174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.603380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.603420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.603564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.603611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.603816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.603881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.604152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.604193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.604409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.604450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.604654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.604694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.604909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.604951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.605214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.605255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.605463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.605503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 
00:35:52.272 [2024-12-10 00:17:36.605760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.605800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.605948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.605989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.606204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.606245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.606445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.606485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.606678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.606719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.606923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.606965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.607170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.607210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.607467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.607507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.607711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.607751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.607996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.608037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 
00:35:52.272 [2024-12-10 00:17:36.608278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.608319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.608550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.608590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.608785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.608834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.608988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.609029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.609183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.609223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.609419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.609459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.609616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.609655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.609852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.609893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.610018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.610058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.610371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.610416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 
00:35:52.272 [2024-12-10 00:17:36.610643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.610684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.610881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.610923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.611207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.611248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.611403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.611444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.611644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.611685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.611881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.611923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.612129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.612169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.612377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.612418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.612629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.612671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.612951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.612993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 
00:35:52.272 [2024-12-10 00:17:36.613137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.613178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.272 [2024-12-10 00:17:36.613333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.272 [2024-12-10 00:17:36.613374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.272 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.613580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.613626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.613845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.613887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.614097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.614137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.614343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.614383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.614645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.614686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.614845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.614887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.615147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.615187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.615314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.615355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 
00:35:52.273 [2024-12-10 00:17:36.615545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.615586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.615789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.615841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.616005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.616046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.616274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.616315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.616455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.616496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.616778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.616819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.617056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.617101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.617389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.617431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.617668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.617709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.617970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.618011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 
00:35:52.273 [2024-12-10 00:17:36.618172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.618213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.618416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.618457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.618594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.618634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.618844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.618887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.619088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.619129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.619274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.619315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.619518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.619558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.619752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.619794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.620030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.620072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.620212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.620256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 
00:35:52.273 [2024-12-10 00:17:36.620392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.620432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.620643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.620683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.620900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.620942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.621165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.621206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.621427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.621467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.621726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.621767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.621914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.621955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.622090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.622130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.622342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.622383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 00:35:52.273 [2024-12-10 00:17:36.622591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.273 [2024-12-10 00:17:36.622631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.273 qpair failed and we were unable to recover it. 
00:35:52.273 [2024-12-10 00:17:36.622932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.273 [2024-12-10 00:17:36.622974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420
00:35:52.273 qpair failed and we were unable to recover it.
00:35:52.273 [message repeated: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it.", continuously from 00:17:36.623 through 00:17:36.677]
00:35:52.277 [2024-12-10 00:17:36.677599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.277 [2024-12-10 00:17:36.677638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420
00:35:52.277 qpair failed and we were unable to recover it.
00:35:52.277 [2024-12-10 00:17:36.677897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.677938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.678087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.678127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.678330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.678370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.678509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.678550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.678758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.678798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.679090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.679131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.679278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.679319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.679445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.679484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.679696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.679735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.680018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.680060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 
00:35:52.277 [2024-12-10 00:17:36.680255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.680296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.680490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.680530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.680821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.680911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.681150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.681190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.681459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.681500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.681759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.681799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.682096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.682137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.682282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.682323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.682601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.682641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.682921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.682969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 
00:35:52.277 [2024-12-10 00:17:36.683203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.683244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.683524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.683565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.683757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.683798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.684007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.684049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.684338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.684378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.684639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.684679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.684892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.684934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.685086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.685126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.685383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.685424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.685699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.685740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 
00:35:52.277 [2024-12-10 00:17:36.686026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.686068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.686325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.686365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.686561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.686602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.686870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.686913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.687193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.687233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.687360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.687401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.687682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.687723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.688025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.688066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.688329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.688370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.688631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.688671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 
00:35:52.277 [2024-12-10 00:17:36.688934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.688976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.689195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.689236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.689465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.689505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.689766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.689806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.277 qpair failed and we were unable to recover it. 00:35:52.277 [2024-12-10 00:17:36.689976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.277 [2024-12-10 00:17:36.690018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.690159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.690199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.690398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.690439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.690696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.690737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.690882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.690923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.691213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.691254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 
00:35:52.278 [2024-12-10 00:17:36.691445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.691485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.691638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.691677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.691871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.691912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.692170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.692209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.692413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.692458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.692692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.692731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.692936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.692977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.693181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.693220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.693422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.693462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.693589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.693635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 
00:35:52.278 [2024-12-10 00:17:36.693836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.693878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.694023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.694064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.694345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.694386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.694585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.694626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.694898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.694941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.695223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.695263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.695485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.695526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.695720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.695761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.695977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.696018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.696214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.696255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 
00:35:52.278 [2024-12-10 00:17:36.696464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.696505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.696702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.696742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.697033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.697076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.697228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.697269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.697421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.697462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.697609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.697648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.697929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.697972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.698257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.698298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.698578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.698619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.698767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.698808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 
00:35:52.278 [2024-12-10 00:17:36.699046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.699088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.699294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.699335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.699594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.699638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.699914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.699955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.700178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.700219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.700432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.700472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.700553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244af20 (9): Bad file descriptor 00:35:52.278 [2024-12-10 00:17:36.700957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.701038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.701314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.701378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.701584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.701627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 
00:35:52.278 [2024-12-10 00:17:36.701846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.701889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.702094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.702135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.702441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.702482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.702705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.702746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.702975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.703017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.703255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.703295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.703578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.703619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.703898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.278 [2024-12-10 00:17:36.703940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.278 qpair failed and we were unable to recover it. 00:35:52.278 [2024-12-10 00:17:36.704163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.704204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.704496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.704536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 
00:35:52.556 [2024-12-10 00:17:36.704769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.704817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.705117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.705160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.705434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.705476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.705624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.705665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.705924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.705967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.706108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.706149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.706341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.706382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.706663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.706704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.706853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.706896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.707103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.707144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 
00:35:52.556 [2024-12-10 00:17:36.707352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.707393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.707601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.707641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.707870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.707913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.708117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.708175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.708395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.708436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.708692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.708733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.708960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.709002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.709275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.556 [2024-12-10 00:17:36.709316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.556 qpair failed and we were unable to recover it. 00:35:52.556 [2024-12-10 00:17:36.709581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.557 [2024-12-10 00:17:36.709623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.557 qpair failed and we were unable to recover it. 00:35:52.557 [2024-12-10 00:17:36.709772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.557 [2024-12-10 00:17:36.709814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.557 qpair failed and we were unable to recover it. 
00:35:52.557 [2024-12-10 00:17:36.710100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.557 [2024-12-10 00:17:36.710142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.557 qpair failed and we were unable to recover it. 00:35:52.557 [2024-12-10 00:17:36.710353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.557 [2024-12-10 00:17:36.710394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.557 qpair failed and we were unable to recover it. 00:35:52.557 [2024-12-10 00:17:36.710630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.557 [2024-12-10 00:17:36.710672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.557 qpair failed and we were unable to recover it. 00:35:52.557 [2024-12-10 00:17:36.710839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.557 [2024-12-10 00:17:36.710881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.557 qpair failed and we were unable to recover it. 00:35:52.557 [2024-12-10 00:17:36.711150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.557 [2024-12-10 00:17:36.711191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.557 qpair failed and we were unable to recover it. 00:35:52.557 [2024-12-10 00:17:36.711330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.557 [2024-12-10 00:17:36.711372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.557 qpair failed and we were unable to recover it. 00:35:52.557 [2024-12-10 00:17:36.711577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.557 [2024-12-10 00:17:36.711618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.557 qpair failed and we were unable to recover it. 00:35:52.557 [2024-12-10 00:17:36.711884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.557 [2024-12-10 00:17:36.711928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.557 qpair failed and we were unable to recover it. 00:35:52.557 [2024-12-10 00:17:36.712120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.557 [2024-12-10 00:17:36.712161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.557 qpair failed and we were unable to recover it. 00:35:52.557 [2024-12-10 00:17:36.712442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.557 [2024-12-10 00:17:36.712483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.557 qpair failed and we were unable to recover it. 
00:35:52.557-00:35:52.563 [2024-12-10 00:17:36.712731 through 00:17:36.770287] repeated back-to-back connection failures, every attempt of the form:
  posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
  nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=<one of 0x7fa974000b90, 0x7fa978000b90, 0x7fa980000b90, 0x243d000> with addr=10.0.0.2, port=4420
  qpair failed and we were unable to recover it.
00:35:52.563 [2024-12-10 00:17:36.770498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.770539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.770751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.770799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.771007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.771048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.771316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.771356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.771502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.771542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.771673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.771714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.772018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.772061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.772252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.772293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.772576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.772617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.772882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.772924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 
00:35:52.563 [2024-12-10 00:17:36.773156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.773197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.773340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.773381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.773665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.773706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.773852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.773895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.774096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.774137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.774339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.774380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.774519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.774559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.774777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.774818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.775088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.775129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.775336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.775377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 
00:35:52.563 [2024-12-10 00:17:36.775651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.775692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.775897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.775938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.776156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.776197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.776470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.776510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.776722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.776762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.776985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.777026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.777310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.777351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.777479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.777520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.777741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.777796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.777962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.778007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 
00:35:52.563 [2024-12-10 00:17:36.778220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.778260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.778485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.778526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.778720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.778761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.563 [2024-12-10 00:17:36.779057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.563 [2024-12-10 00:17:36.779099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.563 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.779302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.779342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.779618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.779658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.779858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.779900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.780058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.780098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.780309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.780349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.780490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.780531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 
00:35:52.564 [2024-12-10 00:17:36.780675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.780715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.780925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.780967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.781297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.781339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.781597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.781637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.781917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.781958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.782117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.782158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.782354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.782394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.782689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.782729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.782866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.782908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.783168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.783209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 
00:35:52.564 [2024-12-10 00:17:36.783338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.783376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.783656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.783696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.783904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.783946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.784158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.784199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.784408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.784447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.784742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.784789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.785037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.785079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.785224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.785264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.785473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.785513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.785710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.785748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 
00:35:52.564 [2024-12-10 00:17:36.785962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.786000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.786124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.786161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.786436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.786474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.786618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.786656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.786912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.786951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.787104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.787142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.787359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.787396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.787597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.787635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.787844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.787884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.788108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.788146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 
00:35:52.564 [2024-12-10 00:17:36.788356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.788394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.788593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.788631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.788779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.564 [2024-12-10 00:17:36.788816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.564 qpair failed and we were unable to recover it. 00:35:52.564 [2024-12-10 00:17:36.789092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.789131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.789440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.789477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.789607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.789646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.789868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.789907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.790143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.790181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.790460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.790497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.790687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.790725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 
00:35:52.565 [2024-12-10 00:17:36.790915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.790955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.791229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.791277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.791494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.791538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.791686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.791725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.792010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.792049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.792252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.792291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.792550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.792589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.792858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.792900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.793060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.793099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.793362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.793401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 
00:35:52.565 [2024-12-10 00:17:36.793660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.793699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.793977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.794018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.794301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.794339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.794567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.794606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.794808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.794860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.795007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.795045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.795270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.795311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.795435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.795476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.795692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.795733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.795925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.795967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 
00:35:52.565 [2024-12-10 00:17:36.796159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.796201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.796413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.796454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.796668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.796708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.796942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.796984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.797244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.797284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.797502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.797542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.797854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.797897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.565 [2024-12-10 00:17:36.798044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.565 [2024-12-10 00:17:36.798084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.565 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.798371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.798412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.798543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.798589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 
00:35:52.566 [2024-12-10 00:17:36.798799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.798854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.799161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.799202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.799359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.799399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.799609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.799649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.799789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.799839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.800049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.800092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.800303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.800343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.800505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.800546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.800689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.800730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.801010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.801052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 
00:35:52.566 [2024-12-10 00:17:36.801308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.801349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.801506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.801547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.801835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.801877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.802034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.802082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.802365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.802407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.802617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.802657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.802868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.802911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.803140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.803181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.803379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.803419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.803653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.803693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 
00:35:52.566 [2024-12-10 00:17:36.803914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.803957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.804110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.804151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.804365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.804405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.804545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.804585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.804866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.804908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.805057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.805098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.805289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.805338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.805556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.805596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.805838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.805880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.806087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.806128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 
00:35:52.566 [2024-12-10 00:17:36.806266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.806306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.806508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.806548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.806767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.806808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.807017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.807059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.807317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.807357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.807618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.807658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.807862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.566 [2024-12-10 00:17:36.807904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.566 qpair failed and we were unable to recover it. 00:35:52.566 [2024-12-10 00:17:36.808134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.808174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.808325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.808365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.808517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.808558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 
00:35:52.567 [2024-12-10 00:17:36.808868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.808911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.809173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.809214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.809455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.809495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.809714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.809755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.809975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.810017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.810306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.810347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.810556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.810598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.810746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.810787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.810942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.810983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.811174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.811215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 
00:35:52.567 [2024-12-10 00:17:36.811474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.811514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.811777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.811818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.812036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.812077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.812393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.812441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.812652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.812694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.812933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.812978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.813189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.813231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.813515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.813556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.813841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.813883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.814039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.814080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 
00:35:52.567 [2024-12-10 00:17:36.814227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.814267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.814478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.814519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.814745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.814786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.815054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.815095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.815242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.815283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.815544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.815587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.815858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.815907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.816062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.816103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.816334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.816379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.816626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.816666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 
00:35:52.567 [2024-12-10 00:17:36.816810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.816868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.817160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.817201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.817471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.817512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.817779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.817820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.817977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.818018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.818299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.567 [2024-12-10 00:17:36.818341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.567 qpair failed and we were unable to recover it. 00:35:52.567 [2024-12-10 00:17:36.818550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.818591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.818859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.818901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.819217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.819257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.819488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.819529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 
00:35:52.568 [2024-12-10 00:17:36.819759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.819801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.820030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.820071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.820296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.820336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.820538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.820578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.820788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.820836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.821120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.821161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.821371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.821412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.821678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.821719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.821933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.821975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.822256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.822297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 
00:35:52.568 [2024-12-10 00:17:36.822580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.822621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.822847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.822890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.823100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.823141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.823346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.823394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.823538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.823580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.823791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.823842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.824053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.824093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.824320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.824360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.824553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.824593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.824879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.824920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 
00:35:52.568 [2024-12-10 00:17:36.825053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.825094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.825351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.825391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.825670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.825711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.825916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.825957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.826218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.826258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.826412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.826452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.826713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.826763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.827000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.827041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.568 [2024-12-10 00:17:36.827188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.568 [2024-12-10 00:17:36.827228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.568 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.827379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.827419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 
00:35:52.569 [2024-12-10 00:17:36.827723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.827763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.827901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.827943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.828143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.828183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.828302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.828343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.828546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.828586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.828805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.828855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.829133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.829174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.829385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.829425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.829566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.829606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.829863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.829906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 
00:35:52.569 [2024-12-10 00:17:36.830172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.830214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.830438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.830477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.830698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.830739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.830941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.830983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.831113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.831153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.831282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.831321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.831523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.831562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.831754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.831794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.832019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.832060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.832342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.832382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 
00:35:52.569 [2024-12-10 00:17:36.832605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.832645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.832897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.832938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.833197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.833237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.833405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.833456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.833683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.833724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.833938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.833982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.834191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.834231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.834378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.834419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.834624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.834665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.834814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.834865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 
00:35:52.569 [2024-12-10 00:17:36.835057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.835099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.835320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.835361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.835514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.835554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.835861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.835903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.836129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.836170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.836325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.836366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.836513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.569 [2024-12-10 00:17:36.836554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.569 qpair failed and we were unable to recover it. 00:35:52.569 [2024-12-10 00:17:36.836846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.836888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.837101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.837142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.837336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.837377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 
00:35:52.570 [2024-12-10 00:17:36.837671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.837711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.837931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.837974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.838116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.838157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.838462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.838503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.838801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.838852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.839096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.839137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.839361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.839401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.839625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.839666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.839936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.839979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.840215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.840255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 
00:35:52.570 [2024-12-10 00:17:36.840552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.840599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.840789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.840840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.841101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.841142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.841352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.841393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.841629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.841671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.841923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.841966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.842185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.842226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.842455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.842495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.842735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.842775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.842989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.843032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 
00:35:52.570 [2024-12-10 00:17:36.843298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.843338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.843597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.843638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.843901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.843943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.844101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.844141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.844435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.844476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.844733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.844774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.844994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.845036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.845322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.845362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.845507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.845548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.845859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.845902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 
00:35:52.570 [2024-12-10 00:17:36.846204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.846245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.846453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.570 [2024-12-10 00:17:36.846493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.570 qpair failed and we were unable to recover it. 00:35:52.570 [2024-12-10 00:17:36.846708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.846749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.847016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.847058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.847260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.847300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.847583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.847623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.847767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.847807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.848009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.848056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.848341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.848381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.848588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.848629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 
00:35:52.571 [2024-12-10 00:17:36.848835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.848877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.849092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.849133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.849347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.849388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.849582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.849623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.849934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.849977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.850256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.850299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.850505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.850546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.850756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.850796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.851077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.851118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.851407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.851448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 
00:35:52.571 [2024-12-10 00:17:36.851656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.851697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.851998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.852040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.852296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.852337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.852539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.852580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.852880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.852922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.853049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.853090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.853287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.853328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.853634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.853674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.853927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.853969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.854242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.854283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 
00:35:52.571 [2024-12-10 00:17:36.854487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.854528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.854768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.854808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.855026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.855069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.855375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.855415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.855621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.855667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.855820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.855870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.856130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.856171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.856383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.856424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.856628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.856669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.856800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.856848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 
00:35:52.571 [2024-12-10 00:17:36.857129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.857170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.571 [2024-12-10 00:17:36.857430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.571 [2024-12-10 00:17:36.857470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.571 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.857678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.857718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.857959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.858002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.858297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.858338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.858492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.858533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.858842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.858884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.859163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.859203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.859439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.859486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.859751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.859792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 
00:35:52.572 [2024-12-10 00:17:36.860008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.860050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.860335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.860375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.860657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.860698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.860997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.861040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.861251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.861292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.861443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.861484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.861637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.861678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.861908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.861952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.862162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.862203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.862461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.862501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 
00:35:52.572 [2024-12-10 00:17:36.862653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.862694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.862976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.863025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.863319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.863360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.863597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.863640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.863898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.863940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.864154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.864195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.864401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.864442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.864656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.864698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.864845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.864887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.865020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.865060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 
00:35:52.572 [2024-12-10 00:17:36.865342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.865383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.865587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.865628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.865843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.865886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.866099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.866140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.866371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.866411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.866698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.866739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.866999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.867042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.867257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.867298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.867510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.867550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.572 [2024-12-10 00:17:36.867841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.867883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 
00:35:52.572 [2024-12-10 00:17:36.868166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.572 [2024-12-10 00:17:36.868207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.572 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.868497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.868537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.868683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.868724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.868951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.868993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.869210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.869250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.869524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.869565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.869848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.869890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.870165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.870206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.870482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.870532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.870800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.870850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 
00:35:52.573 [2024-12-10 00:17:36.871046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.871087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.871304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.871345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.871536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.871576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.871770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.871811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.872051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.872092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.872363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.872404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.872708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.872750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.872914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.872956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.873166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.873206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.873481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.873521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 
00:35:52.573 [2024-12-10 00:17:36.873677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.873717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.873949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.873999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.874196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.874236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.874375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.874417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.874612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.874652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.874857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.874898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.875179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.875220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.875432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.875473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.875630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.875669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.875864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.875905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 
00:35:52.573 [2024-12-10 00:17:36.876041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.876081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.876289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.876331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.876648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.876688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.876842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.876884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.877170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.877211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.877446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.877487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.877772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.877813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.878038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.878079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.878213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.878254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 00:35:52.573 [2024-12-10 00:17:36.878553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.573 [2024-12-10 00:17:36.878594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.573 qpair failed and we were unable to recover it. 
00:35:52.574 [2024-12-10 00:17:36.878740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.878780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.879019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.879065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.879230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.879270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.879481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.879523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.879789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.879841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.880128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.880169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.880470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.880511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.880657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.880698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.880936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.880988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.881145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.881187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 
00:35:52.574 [2024-12-10 00:17:36.881340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.881380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.881664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.881705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.881973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.882015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.882153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.882194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.882398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.882439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.882633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.882675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.882887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.882928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.883126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.883167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.883380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.883422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.883702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.883743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 
00:35:52.574 [2024-12-10 00:17:36.883986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.884028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.884269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.884317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.884595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.884636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.884862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.884905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.885189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.885230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.885462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.885503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.885780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.885821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.885977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.886018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.886298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.886339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.886550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.886591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 
00:35:52.574 [2024-12-10 00:17:36.886898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.886940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.887081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.887122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.887384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.887425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.887685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.574 [2024-12-10 00:17:36.887725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.574 qpair failed and we were unable to recover it. 00:35:52.574 [2024-12-10 00:17:36.887935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.887977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.888179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.888220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.888478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.888518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.888714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.888756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.888995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.889037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.889263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.889303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 
00:35:52.575 [2024-12-10 00:17:36.889459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.889499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.889756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.889797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.890022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.890062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.890347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.890388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.890513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.890555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.890759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.890800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.891091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.891133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.891416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.891457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.891736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.891786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.892092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.892137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 
00:35:52.575 [2024-12-10 00:17:36.892359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.892400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.892556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.892603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.892896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.892939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.893143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.893184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.893442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.893483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.893623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.893664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.893810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.893861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.894066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.894105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.894296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.894335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.894617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.894658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 
00:35:52.575 [2024-12-10 00:17:36.894850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.894892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.895104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.895146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.895459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.895500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.895758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.895799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.896044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.896085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.896296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.896337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.896614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.896655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.896951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.896994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.897199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.897240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.897499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.897539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 
00:35:52.575 [2024-12-10 00:17:36.897675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.897716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.897870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.897912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.898194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.575 [2024-12-10 00:17:36.898235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.575 qpair failed and we were unable to recover it. 00:35:52.575 [2024-12-10 00:17:36.898494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.898535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.898809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.898861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.899077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.899118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.899325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.899366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.899568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.899609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.899870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.899911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.900182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.900222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 
00:35:52.576 [2024-12-10 00:17:36.900486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.900527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.900667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.900707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.900969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.901011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.901207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.901247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.901527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.901568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.901762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.901803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.902097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.902139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.902424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.902465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.902694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.902742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.902972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.903014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 
00:35:52.576 [2024-12-10 00:17:36.903231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.903272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.903480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.903521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.903811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.903859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.904147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.904188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.904383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.904424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.904722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.904763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.905036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.905078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.905359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.905401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.905548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.905590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.905807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.905860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 
00:35:52.576 [2024-12-10 00:17:36.906086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.906127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.906410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.906451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.906739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.906780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.907107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.907154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.907363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.907405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.907602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.907643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.907855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.907897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.908028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.908069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.908279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.908320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.908468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.908509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 
00:35:52.576 [2024-12-10 00:17:36.908732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.908773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.908911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.908954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.576 qpair failed and we were unable to recover it. 00:35:52.576 [2024-12-10 00:17:36.909183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.576 [2024-12-10 00:17:36.909223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.909519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.909561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.909757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.909798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.910130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.910173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.910324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.910364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.910658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.910698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.910910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.910951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.911234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.911274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 
00:35:52.577 [2024-12-10 00:17:36.911543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.911583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.911738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.911778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.912092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.912134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.912393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.912434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.912693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.912734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.912929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.912971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.913257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.913297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.913557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.913599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.913759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.913807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.914028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.914069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 
00:35:52.577 [2024-12-10 00:17:36.914346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.914386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.914605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.914646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.914907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.914949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.915157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.915198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.915403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.915444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.915660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.915700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.915958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.916000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.916149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.916189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.916391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.916431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.916560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.916599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 
00:35:52.577 [2024-12-10 00:17:36.916793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.916843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.917063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.917104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.917314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.917354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.917562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.917602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.917809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.917862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.918078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.918118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.918341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.918380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.918642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.918683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.918885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.918927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 00:35:52.577 [2024-12-10 00:17:36.919134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.577 [2024-12-10 00:17:36.919173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.577 qpair failed and we were unable to recover it. 
00:35:52.578 [2024-12-10 00:17:36.919449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.919490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.919717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.919757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.920060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.920102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.920306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.920346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.920539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.920581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.920911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.920958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.921184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.921225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.921500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.921541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.921746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.921787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.922062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.922103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 
00:35:52.578 [2024-12-10 00:17:36.922318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.922359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.922487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.922528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.922737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.922778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.923053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.923096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.923294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.923334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.923536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.923576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.923861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.923903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.924189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.924229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.924371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.924418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.924562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.924603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 
00:35:52.578 [2024-12-10 00:17:36.924845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.924888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.925096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.925137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.925342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.925383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.925584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.925625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.925819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.925871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.926157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.926199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.926432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.926473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.926664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.926705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.926901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.926943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.927138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.927179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 
00:35:52.578 [2024-12-10 00:17:36.927375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.927415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.927675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.927715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.927986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.928030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.928264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.928305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.928587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.928628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.928917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.928961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.929088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.929129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.929398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.929439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.578 [2024-12-10 00:17:36.929721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.578 [2024-12-10 00:17:36.929762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.578 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.929963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.930005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 
00:35:52.579 [2024-12-10 00:17:36.930267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.930307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.930507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.930549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.930708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.930749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.931040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.931082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.931289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.931331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.931541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.931587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.931852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.931895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.932176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.932216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.932483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.932523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.932803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.932853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 
00:35:52.579 [2024-12-10 00:17:36.933135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.933176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.933379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.933420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.933701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.933742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.933966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.934008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.934221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.934261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.934545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.934586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.934791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.934841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.934987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.935028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.935246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.935293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.935578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.935619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 
00:35:52.579 [2024-12-10 00:17:36.935834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.935877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.936159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.936199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.936467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.936508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.936734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.936775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.937020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.937061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.937283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.937323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.937569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.937610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.937896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.937938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.938129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.938170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.938437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.938477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 
00:35:52.579 [2024-12-10 00:17:36.938711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.938751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.938951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.938993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.939272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.939313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.579 qpair failed and we were unable to recover it. 00:35:52.579 [2024-12-10 00:17:36.939522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.579 [2024-12-10 00:17:36.939562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.939794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.939843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.940045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.940086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.940227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.940267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.940527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.940569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.940708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.940749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.941016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.941057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 
00:35:52.580 [2024-12-10 00:17:36.941346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.941387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.941687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.941728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.941987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.942030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.942190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.942232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.942437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.942480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.942778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.942838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.943009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.943050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.943264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.943305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.943443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.943484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.943744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.943785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 
00:35:52.580 [2024-12-10 00:17:36.943930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.943973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.944177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.944218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.944352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.944392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.944611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.944651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.944799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.944855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.945066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.945106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.945299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.945339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.945599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.945640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.945775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.945816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.946039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.946081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 
00:35:52.580 [2024-12-10 00:17:36.946282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.946323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.946468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.946509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.946717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.946758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.947032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.947076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.947290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.947330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.947542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.947583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.947848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.947891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.948105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.948145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.948353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.948393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 00:35:52.580 [2024-12-10 00:17:36.948540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.948581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.580 qpair failed and we were unable to recover it. 
00:35:52.580 [2024-12-10 00:17:36.948842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.580 [2024-12-10 00:17:36.948884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.949015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.949056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.949314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.949361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.949556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.949598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.949834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.949875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.950086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.950127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.950431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.950473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.950699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.950740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.950964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.951006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.951223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.951264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 
00:35:52.581 [2024-12-10 00:17:36.951497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.951538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.951679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.951720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.951932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.951975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.952127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.952168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.952364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.952405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.952605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.952646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.952943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.952986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.953181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.953221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.953500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.953541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.953802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.953852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 
00:35:52.581 [2024-12-10 00:17:36.954133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.954174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.954367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.954408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.954634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.954675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.954939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.954981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.955219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.955260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.955540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.955581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.955791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.955840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.956099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.956140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.956347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.956389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.956610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.956657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 
00:35:52.581 [2024-12-10 00:17:36.956804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.956855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.957010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.957052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.957341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.957382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.957640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.957681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.957891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.957934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.958139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.958180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.958479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.958520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.958681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.958722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.958918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.958960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 00:35:52.581 [2024-12-10 00:17:36.959247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.581 [2024-12-10 00:17:36.959289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.581 qpair failed and we were unable to recover it. 
00:35:52.582 [2024-12-10 00:17:36.959483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.959525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.959740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.959781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.960015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.960065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.960276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.960317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.960521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.960561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.960838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.960880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.961143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.961184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.961389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.961428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.961656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.961697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.961858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.961900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 
00:35:52.582 [2024-12-10 00:17:36.962108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.962148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.962276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.962317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.962572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.962613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.962812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.962866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.963126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.963167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.963474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.963515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.963789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.963854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.964079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.964120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.964354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.964394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.964610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.964650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 
00:35:52.582 [2024-12-10 00:17:36.964927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.964969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.965173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.965214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.965446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.965486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.965625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.965665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.965972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.966014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.966255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.966296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.966431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.966471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.966743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.966783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.967091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.967132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.967378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.967418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 
00:35:52.582 [2024-12-10 00:17:36.967575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.967615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.967816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.967867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.968065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.968105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.968387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.968427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.968689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.968730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.968928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.968971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.969124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.969164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.969421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.969461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.582 qpair failed and we were unable to recover it. 00:35:52.582 [2024-12-10 00:17:36.969673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.582 [2024-12-10 00:17:36.969714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.969909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.969950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 
00:35:52.583 [2024-12-10 00:17:36.970092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.970132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.970335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.970376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.970587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.970626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.970779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.970820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.971040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.971079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.971361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.971402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.971687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.971728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.971934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.971976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.972202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.972242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.972452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.972493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 
00:35:52.583 [2024-12-10 00:17:36.972752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.972792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.972950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.972992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.973194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.973234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.973503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.973543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.973794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.973845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.974065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.974106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.974308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.974355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.974643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.974684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.974941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.974983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.975180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.975221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 
00:35:52.583 [2024-12-10 00:17:36.975523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.975564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.975868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.975910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.976059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.976099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.976297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.976337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.976490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.976531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.976811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.976859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.977149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.977189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.977397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.977438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.977577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.977617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.583 qpair failed and we were unable to recover it. 00:35:52.583 [2024-12-10 00:17:36.977836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.583 [2024-12-10 00:17:36.977893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 
00:35:52.584 [2024-12-10 00:17:36.978176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.978217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.978354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.978395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.978675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.978715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.978858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.978899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.979031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.979070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.979202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.979241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.979448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.979489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.979763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.979803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.980043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.980085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.980366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.980406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 
00:35:52.584 [2024-12-10 00:17:36.980669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.980710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.980914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.980956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.981280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.981320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.981553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.981594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.981753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.981795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.982085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.982127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.982282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.982322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.982530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.982570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.982856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.982899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.983103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.983143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 
00:35:52.584 [2024-12-10 00:17:36.983284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.983324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.983521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.983561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.983869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.983910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.984100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.984141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.984331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.984371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.984573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.984613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.984870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.984918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.985224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.985264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.985551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.985591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 00:35:52.584 [2024-12-10 00:17:36.985794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.985858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.584 qpair failed and we were unable to recover it. 
00:35:52.584 [2024-12-10 00:17:36.986055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.584 [2024-12-10 00:17:36.986096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.986371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.986411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.986614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.986654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.986858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.986900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.987038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.987078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.987355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.987396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.987589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.987630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.987832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.987874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.988157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.988197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.988389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.988430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 
00:35:52.585 [2024-12-10 00:17:36.988713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.988754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.988894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.988935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.989224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.989265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.989466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.989507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.989741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.989782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.990048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.990090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.990310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.990350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.990559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.990599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.990804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.990855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.990999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.991039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 
00:35:52.585 [2024-12-10 00:17:36.991239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.991280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.991580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.991621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.991861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.991903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.992136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.992183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.992379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.992420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.992694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.992735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.992968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.993010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.993155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.993194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.993401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.993439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.993711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.993750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 
00:35:52.585 [2024-12-10 00:17:36.993927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.993967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.994194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.994232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.994386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.994424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.994700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.994738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.994951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.994990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.995247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.995286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.995506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.995550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.995743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.995781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.996055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.996098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 00:35:52.585 [2024-12-10 00:17:36.996360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.585 [2024-12-10 00:17:36.996398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.585 qpair failed and we were unable to recover it. 
00:35:52.586 [2024-12-10 00:17:36.996655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:36.996693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:36.996955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:36.996994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:36.997146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:36.997185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:36.997411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:36.997450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:36.997613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:36.997651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:36.997910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:36.997949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:36.998159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:36.998198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:36.998396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:36.998434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:36.998760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:36.998799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:36.999023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:36.999064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 
00:35:52.586 [2024-12-10 00:17:36.999214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:36.999252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:36.999483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:36.999522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:36.999727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:36.999766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:36.999929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:36.999969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.000161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.000200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.000459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.000499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.000633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.000670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.000814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.000864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.001070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.001109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.001366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.001404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 
00:35:52.586 [2024-12-10 00:17:37.001536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.001575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.001778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.001817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.002020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.002058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.002288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.002337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.002536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.002578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.002728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.002768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.002931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.002973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.003186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.003227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.003426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.003466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.003677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.003718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 
00:35:52.586 [2024-12-10 00:17:37.003934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.003977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.004248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.004288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.004494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.004534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.004672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.004714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.004914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.004956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.005215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.005256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.005393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.005441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.005716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.586 [2024-12-10 00:17:37.005756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.586 qpair failed and we were unable to recover it. 00:35:52.586 [2024-12-10 00:17:37.005981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.006023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.006254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.006295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 
00:35:52.587 [2024-12-10 00:17:37.006422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.006462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.006722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.006763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.006911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.006952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.007231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.007272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.007474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.007515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.007652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.007693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.007910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.007952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.008111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.008151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.008346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.008386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.008664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.008706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 
00:35:52.587 [2024-12-10 00:17:37.008855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.008898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.009104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.009145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.009275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.009316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.009514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.009555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.009800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.009848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.010070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.010111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.010304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.010345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.010542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.010584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.010719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.587 [2024-12-10 00:17:37.010760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.587 qpair failed and we were unable to recover it. 00:35:52.587 [2024-12-10 00:17:37.010914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.870 [2024-12-10 00:17:37.010957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.870 qpair failed and we were unable to recover it. 
00:35:52.870 [2024-12-10 00:17:37.011176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.870 [2024-12-10 00:17:37.011217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.870 qpair failed and we were unable to recover it. 00:35:52.870 [2024-12-10 00:17:37.011423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.870 [2024-12-10 00:17:37.011464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.870 qpair failed and we were unable to recover it. 00:35:52.870 [2024-12-10 00:17:37.011679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.870 [2024-12-10 00:17:37.011721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa974000b90 with addr=10.0.0.2, port=4420 00:35:52.870 qpair failed and we were unable to recover it. 00:35:52.870 [2024-12-10 00:17:37.011968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.870 [2024-12-10 00:17:37.012024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.870 qpair failed and we were unable to recover it. 00:35:52.870 [2024-12-10 00:17:37.012255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.012299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.012443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.012484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.012613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.012654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.012858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.012902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.013155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.013195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.013408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.013449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 
00:35:52.871 [2024-12-10 00:17:37.013657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.013699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.013841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.013884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.014053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.014094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.014253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.014294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.014526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.014567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.014712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.014752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.014961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.015003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.015223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.015265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.015390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.015431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.015619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.015660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 
00:35:52.871 [2024-12-10 00:17:37.015858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.015901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.016097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.016139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.016333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.016373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.016593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.016635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.016768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.016810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.017023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.017065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.017193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.017234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.017502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.017544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.017755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.017795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.018001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.018042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 
00:35:52.871 [2024-12-10 00:17:37.018202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.018251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.018376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.018402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.018636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.018661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.018839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.018865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.019039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.019064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.019221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.019246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.019338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.019364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.019535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.019567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.019745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.019771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.019867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.019891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 
00:35:52.871 [2024-12-10 00:17:37.020055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.020080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.020243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.020269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.020446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.871 [2024-12-10 00:17:37.020470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.871 qpair failed and we were unable to recover it. 00:35:52.871 [2024-12-10 00:17:37.020635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.020661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.020781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.020807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.020941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.020966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.021066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.021091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.021316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.021341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.021502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.021527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.021624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.021647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 
00:35:52.872 [2024-12-10 00:17:37.021736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.021766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.021941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.021968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.022069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.022094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.022188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.022211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.022375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.022400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.022517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.022542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.022788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.022813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.022926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.022951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.023116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.023142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.023255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.023280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 
00:35:52.872 [2024-12-10 00:17:37.023442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.023466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.023627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.023652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.023815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.023849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.024020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.024045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.024219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.024245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.024418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.024442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.024563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.024589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.024776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.024801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.024914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.024939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.025108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.025133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 
00:35:52.872 [2024-12-10 00:17:37.025240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.025268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.025433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.025458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.025553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.025577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.025755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.025782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.026020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.026046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.026151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.026176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.026356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.026381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.026567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.026593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.026692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.026715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 00:35:52.872 [2024-12-10 00:17:37.026815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.872 [2024-12-10 00:17:37.026846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.872 qpair failed and we were unable to recover it. 
00:35:52.878 [2024-12-10 00:17:37.062218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.062244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.062508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.062533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.062636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.062661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.062836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.062861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.063042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.063069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.063228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.063253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.063467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.063492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.063684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.063710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.063874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.063908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.064021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.064046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 
00:35:52.878 [2024-12-10 00:17:37.064203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.064228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.064344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.064369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.064486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.064511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.064607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.064633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.064788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.064813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.065005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.065031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.065187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.065212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.065384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.065409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.065533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.065558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.065726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.065752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 
00:35:52.878 [2024-12-10 00:17:37.065939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.065965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.066148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.066174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.066342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.066367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.066458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.066482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.066583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.066609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.066721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.066746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.066857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.066883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.067054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.067079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.067171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.067196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 00:35:52.878 [2024-12-10 00:17:37.067358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.878 [2024-12-10 00:17:37.067383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.878 qpair failed and we were unable to recover it. 
00:35:52.879 [2024-12-10 00:17:37.067545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.067575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.067745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.067771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.067915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.067941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.068039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.068064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.068227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.068253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.068425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.068450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.068610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.068635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.068811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.068842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.069006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.069032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.069130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.069155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 
00:35:52.879 [2024-12-10 00:17:37.069249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.069275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.069499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.069524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.069685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.069710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.069804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.069833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.070058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.070083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.070185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.070209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.070301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.070324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.070523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.070548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.070778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.070803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.070899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.070923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 
00:35:52.879 [2024-12-10 00:17:37.071083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.071108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.071210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.071235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.071431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.071456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.071564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.071589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.071834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.071860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.072044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.072069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.072292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.072318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.072488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.072514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.072614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.072639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.072869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.072895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 
00:35:52.879 [2024-12-10 00:17:37.073016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.073041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.073157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.073182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.073360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.073385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.073555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.879 [2024-12-10 00:17:37.073579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.879 qpair failed and we were unable to recover it. 00:35:52.879 [2024-12-10 00:17:37.073686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.073711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.073837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.073863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.073971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.073997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.074161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.074186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.074291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.074316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.074413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.074437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 
00:35:52.880 [2024-12-10 00:17:37.074542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.074570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.074756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.074780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.075037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.075063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.075182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.075206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.075374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.075399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.075623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.075649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.075817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.075847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.076014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.076039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.076216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.076241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.076396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.076421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 
00:35:52.880 [2024-12-10 00:17:37.076510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.076533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.076710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.076736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.076939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.076966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.077129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.077154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.077317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.077342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.077566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.077592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.077701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.077725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.077836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.077862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.078046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.078070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.078229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.078253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 
00:35:52.880 [2024-12-10 00:17:37.078430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.078455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.078630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.078655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.078853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.078879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.079131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.079156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.079260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.079285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.079371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.079394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.079556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.079581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.079763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.079789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.080007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.080032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.080148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.080172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 
00:35:52.880 [2024-12-10 00:17:37.080287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.080312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.080417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.080442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.080536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.080560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.880 qpair failed and we were unable to recover it. 00:35:52.880 [2024-12-10 00:17:37.080653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.880 [2024-12-10 00:17:37.080678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.080858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.080885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.081068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.081094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.081264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.081288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.081463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.081488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.081613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.081638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.081820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.081850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 
00:35:52.881 [2024-12-10 00:17:37.082090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.082119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.082390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.082415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.082637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.082662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.082821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.082852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.082953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.082978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.083079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.083104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.083263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.083288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.083382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.083406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.083565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.083590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.083840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.083865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 
00:35:52.881 [2024-12-10 00:17:37.084020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.084045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.084130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.084154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.084326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.084351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.084458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.084482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.084607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.084633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.084738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.084763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.084921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.084947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.085122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.085147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.085303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.085329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 00:35:52.881 [2024-12-10 00:17:37.085484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.881 [2024-12-10 00:17:37.085509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.881 qpair failed and we were unable to recover it. 
00:35:52.881 [2024-12-10 00:17:37.085611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.881 [2024-12-10 00:17:37.085636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420
00:35:52.881 qpair failed and we were unable to recover it.
00:35:52.881 [2024-12-10 00:17:37.086848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:52.881 [2024-12-10 00:17:37.086903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243d000 with addr=10.0.0.2, port=4420
00:35:52.881 qpair failed and we were unable to recover it.
00:35:52.886 [... the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence repeats continuously from [2024-12-10 00:17:37.085611] through [2024-12-10 00:17:37.126891], against tqpair=0x7fa980000b90 and, for three attempts, tqpair=0x243d000, always with addr=10.0.0.2, port=4420 ...]
00:35:52.887 [2024-12-10 00:17:37.126999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.127024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.127181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.127205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.127449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.127474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.127594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.127619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.127776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.127801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.127993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.128018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.128252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.128276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.128501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.128526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.128699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.128724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.128832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.128858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 
00:35:52.887 [2024-12-10 00:17:37.129109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.129134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.129298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.129323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.129539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.129563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.129734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.129758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.129928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.129954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.130113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.130138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.130292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.130317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.130507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.130532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.130771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.130796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.130986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.131011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 
00:35:52.887 [2024-12-10 00:17:37.131236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.131261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.131428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.131453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.131641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.131666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.131780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.131805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.131921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.131946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.132066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.132091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.132372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.132397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.132632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.132657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.132753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.132776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.132881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.132907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 
00:35:52.887 [2024-12-10 00:17:37.133132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.133156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.133380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.887 [2024-12-10 00:17:37.133405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.887 qpair failed and we were unable to recover it. 00:35:52.887 [2024-12-10 00:17:37.133581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.133606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.133767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.133792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.134019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.134045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.134213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.134238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.134397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.134421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.134691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.134720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.134845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.134871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.135045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.135070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 
00:35:52.888 [2024-12-10 00:17:37.135295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.135319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.135566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.135591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.135703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.135727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.135928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.135955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.136116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.136140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.136255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.136279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.136501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.136526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.136695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.136720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.136878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.136904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.137023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.137048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 
00:35:52.888 [2024-12-10 00:17:37.137273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.137298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.137413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.137438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.137614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.137639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.137746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.137772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.137930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.137955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.138049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.138073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.138228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.138252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.138351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.138374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.138586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.138611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.138712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.138737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 
00:35:52.888 [2024-12-10 00:17:37.138853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.138879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.138999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.139024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.139205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.139230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.139373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.139397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.139573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.139598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.139764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.139789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.139917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.888 [2024-12-10 00:17:37.139943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.888 qpair failed and we were unable to recover it. 00:35:52.888 [2024-12-10 00:17:37.140100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.140125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.140366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.140391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.140656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.140680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 
00:35:52.889 [2024-12-10 00:17:37.140872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.140898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.141003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.141028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.141254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.141278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.141447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.141472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.141698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.141723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.141968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.141994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.142168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.142193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.142435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.142464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.142568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.142593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.142701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.142726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 
00:35:52.889 [2024-12-10 00:17:37.142925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.142951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.143063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.143089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.143265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.143291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.143447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.143472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.143587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.143612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.143738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.143763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.143998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.144026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.144182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.144207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.144455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.144479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.144655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.144680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 
00:35:52.889 [2024-12-10 00:17:37.144837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.144863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.145091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.145116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.145288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.145313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.145585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.145611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.145865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.145891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.145998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.146023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.146249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.146275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.146475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.146499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.146741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.146766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.147023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.147049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 
00:35:52.889 [2024-12-10 00:17:37.147157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.147181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.147404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.147429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.147665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.147690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.147781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.147804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.148050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.148076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.148286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.889 [2024-12-10 00:17:37.148310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.889 qpair failed and we were unable to recover it. 00:35:52.889 [2024-12-10 00:17:37.148484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.148509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.148733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.148758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.148923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.148948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.149117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.149142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 
00:35:52.890 [2024-12-10 00:17:37.149338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.149362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.149537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.149561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.149786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.149811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.149907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.149934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.150110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.150134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.150225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.150250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.150470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.150496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.150664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.150691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.150923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.150948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.151113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.151138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 
00:35:52.890 [2024-12-10 00:17:37.151305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.151330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.151557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.151582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.151751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.151777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.152015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.152041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.152198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.152222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.152445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.152470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.152639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.152664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.152944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.152970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.153080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.153105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.153191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.153214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 
00:35:52.890 [2024-12-10 00:17:37.153439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.153463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.153652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.153677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:52.890 [2024-12-10 00:17:37.153863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.153890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:52.890 [2024-12-10 00:17:37.154083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.154109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.154284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.154309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:52.890 [2024-12-10 00:17:37.154532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.154558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:52.890 [2024-12-10 00:17:37.154731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.154756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.154929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.154955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 
00:35:52.890 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:52.890 [2024-12-10 00:17:37.155126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.155152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.155321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.155345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.155440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.155465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.155622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.155647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.155903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.890 [2024-12-10 00:17:37.155930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.890 qpair failed and we were unable to recover it. 00:35:52.890 [2024-12-10 00:17:37.156130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.156155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.156325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.156350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.156514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.156538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.156719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.156743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 
00:35:52.891 [2024-12-10 00:17:37.156852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.156877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.157068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.157093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.157269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.157294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.157481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.157507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.157681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.157706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.157804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.157833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.158088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.158113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.158225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.158248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.158403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.158433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.158631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.158657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 
00:35:52.891 [2024-12-10 00:17:37.158851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.158876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.158997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.159022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.159268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.159294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.159536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.159561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.159808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.159839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.160079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.160104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.160262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.160287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.160394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.160417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.160633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.160658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.160778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.160802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 
00:35:52.891 [2024-12-10 00:17:37.160989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.161014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.161133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.161157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.161329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.161354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.161462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.161487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.161590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.161614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.161785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.161812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.162090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.162114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.162279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.162304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.162547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.162574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.162739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.162765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 
00:35:52.891 [2024-12-10 00:17:37.162941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.162967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.163153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.163177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.163347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.163372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.163496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.163522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.163763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.163788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.891 qpair failed and we were unable to recover it. 00:35:52.891 [2024-12-10 00:17:37.163920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.891 [2024-12-10 00:17:37.163946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.164129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.164155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.164376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.164402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.164620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.164646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.164766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.164791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 
00:35:52.892 [2024-12-10 00:17:37.165042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.165067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.165184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.165209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.165383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.165407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.165630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.165655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.165760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.165786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.165899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.165924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.166093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.166118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.166221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.166245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.166357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.166389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.166550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.166576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 
00:35:52.892 [2024-12-10 00:17:37.166770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.166796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.166905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.166931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.167090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.167115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.167318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.167343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.167540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.167566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.167790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.167814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.167980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.168004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.168190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.168215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.168401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.168426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.168530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.168554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 
00:35:52.892 [2024-12-10 00:17:37.168778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.168804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.168993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.169019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.169181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.169206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.169376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.169401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.169561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.169586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.169687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.169711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.169868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.169893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.169986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.170010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.170181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.170205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.170314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.170339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 
00:35:52.892 [2024-12-10 00:17:37.170444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.170469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.170589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.170613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.170783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.170808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.170992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.892 [2024-12-10 00:17:37.171017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.892 qpair failed and we were unable to recover it. 00:35:52.892 [2024-12-10 00:17:37.171179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.171205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.171394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.171419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.171528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.171554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.171652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.171676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.171838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.171864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.172032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.172057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 
00:35:52.893 [2024-12-10 00:17:37.172225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.172250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.172361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.172386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.172611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.172636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.172834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.172860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.173017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.173042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.173266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.173292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.173461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.173486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.173659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.173684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.173846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.173875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.174067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.174093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 
00:35:52.893 [2024-12-10 00:17:37.174264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.174289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.174454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.174479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.174649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.174674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.174899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.174925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.175067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.175092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.175206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.175231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.175425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.175450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.175621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.175646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.175783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.175810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.175921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.175945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 
00:35:52.893 [2024-12-10 00:17:37.176168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.176194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.176295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.176320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.176496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.176520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.176711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.176736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.176898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.176924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.177107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.177133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.177262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.177287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.177392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.177417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.177519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.177544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.893 qpair failed and we were unable to recover it. 00:35:52.893 [2024-12-10 00:17:37.177717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.893 [2024-12-10 00:17:37.177742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 
00:35:52.894 [2024-12-10 00:17:37.177890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.177916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.178017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.178042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.178203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.178227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.178457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.178481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.178653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.178679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.178959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.178985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.179080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.179105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.179259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.179284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.179392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.179417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.179662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.179686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 
00:35:52.894 [2024-12-10 00:17:37.179849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.179874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.180038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.180064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.180165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.180190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.180364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.180389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.180547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.180572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.180685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.180710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.180894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.180921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.181084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.181108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.181272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.181300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.181488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.181513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 
00:35:52.894 [2024-12-10 00:17:37.181627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.181652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.181838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.181864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.181977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.182002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.182114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.182139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.182311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.182337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.182504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.182529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.182631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.182655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.182764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.182790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.182892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.182916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.183071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.183096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 
00:35:52.894 [2024-12-10 00:17:37.183262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.183287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.183454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.183479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.183588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.183613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.183777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.183803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.183923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.183948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.184049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.184074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.184185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.184210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.184404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.184429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.894 [2024-12-10 00:17:37.184540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.894 [2024-12-10 00:17:37.184564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.894 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.184657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.184683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 
00:35:52.895 [2024-12-10 00:17:37.184926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.184951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.185125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.185150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.185240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.185265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.185376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.185401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.185515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.185540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.185648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.185673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.185862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.185888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.186113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.186139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.186307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.186332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.186426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.186450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 
00:35:52.895 [2024-12-10 00:17:37.186559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.186583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.186740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.186765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.186867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.186892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.187062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.187087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.187293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.187318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.187545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.187569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.187693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.187718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.187847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.187872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.188052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.188081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.188238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.188263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 
00:35:52.895 [2024-12-10 00:17:37.188361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.188386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.188504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.188529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.188620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.188645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.188740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.188765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.188938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.188964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.189073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.189097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.189207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.189231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.189478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.189503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.189673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.189698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.189812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.189852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 
00:35:52.895 [2024-12-10 00:17:37.190013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.190038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.190209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.190234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.190349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.190374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.190486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.190511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.190687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.190712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.190838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.190864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.190972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.190997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.191189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.895 [2024-12-10 00:17:37.191214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.895 qpair failed and we were unable to recover it. 00:35:52.895 [2024-12-10 00:17:37.191332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.191356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.191462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.191487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 
00:35:52.896 [2024-12-10 00:17:37.191692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.191717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.191817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.191856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.191965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.191989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.192089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.192114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.192218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.192243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.192405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.192431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.192526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.192551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.192713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.192738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.192896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.192922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.193100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.193125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 
00:35:52.896 [2024-12-10 00:17:37.193286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.193311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.193579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.193603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.193704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.193729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.193821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.193855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.193958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.193983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.194164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.194189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.194351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.194376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.194490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.194515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.194669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.194697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.194807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.194837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 
00:35:52.896 [2024-12-10 00:17:37.195082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.195107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.195298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.195323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.195441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.195465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.195634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.195659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.195838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.195863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.196034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.196059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.196224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.196249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.196408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.196432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.196539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.196564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.196793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.196819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 
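The errno = 111 in the posix_sock_create messages above is ECONNREFUSED: each reconnect attempt from the host reaches 10.0.0.2, but nothing is accepting TCP connections on port 4420, which is the expected state while the disconnect test has the target down. A quick manual probe of the same condition is sketched below; the bash /dev/tcp redirection and the 2-second timeout are illustrative assumptions, not commands taken from this run.

  # Sketch (assumed commands, run from the initiator host): probe the NVMe/TCP listen port.
  # errno 111 (ECONNREFUSED) means connect() was actively refused, i.e. no listener on 4420.
  if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
      echo "listener is back on 10.0.0.2:4420"
  else
      echo "still refused/unreachable - matches the errno = 111 flood above"
  fi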
00:35:52.896 [2024-12-10 00:17:37.196934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.196960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.197067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.197092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.197203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.197228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.197335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.197360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.197522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.197549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:52.896 [2024-12-10 00:17:37.197702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.197728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.896 qpair failed and we were unable to recover it. 00:35:52.896 [2024-12-10 00:17:37.197886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.896 [2024-12-10 00:17:37.197912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:52.897 [2024-12-10 00:17:37.198113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.198138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.198276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.198301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 
00:35:52.897 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.897 [2024-12-10 00:17:37.198398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.198424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.198536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.198560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:52.897 [2024-12-10 00:17:37.198655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.198682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.198775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.198800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.198910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.198936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.199110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.199135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.199291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.199316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.199497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.199522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.199629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.199653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 
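The rpc_cmd bdev_malloc_create 64 512 -b Malloc0 step interleaved with the errors above is the test creating its backing device: a 64 MB RAM-backed (malloc) bdev with 512-byte blocks named Malloc0. Issued by hand against a local SPDK target it would look roughly like the sketch below; the spdk checkout path and the default /var/tmp/spdk.sock RPC socket are assumptions, not taken from this log.

  # Sketch (assumed paths): the same bdev setup step issued directly via SPDK's rpc.py.
  ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
  # Verify the bdev exists before the subsystem/namespace steps that follow in the test.
  ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Malloc0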
00:35:52.897 [2024-12-10 00:17:37.199820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.199850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.199962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.199986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.200144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.200169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.200279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.200305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.200421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.200446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.200644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.200669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.200778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.200803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.200927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.200952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.201042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.201066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.201176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.201201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 
00:35:52.897 [2024-12-10 00:17:37.201358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.201383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.201539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.201564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.201746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.201771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.201928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.201954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.202128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.202153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.202307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.202332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.202494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.202518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.202622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.202647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.202754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.202778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.202962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.202987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 
00:35:52.897 [2024-12-10 00:17:37.203179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.203204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.897 [2024-12-10 00:17:37.203313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.897 [2024-12-10 00:17:37.203338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.897 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.203564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.203590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.203685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.203710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.203811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.203840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.203947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.203971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.204152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.204177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.204268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.204293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.204473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.204497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.204673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.204698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 
00:35:52.898 [2024-12-10 00:17:37.204869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.204895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.204990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.205015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.205116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.205140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.205231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.205255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.205371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.205395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.205557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.205585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.205693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.205717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.205837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.205863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.206021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.206046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.206138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.206163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 
00:35:52.898 [2024-12-10 00:17:37.206336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.206361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.206451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.206476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.206640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.206665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.206836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.206862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.206958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.206984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.207141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.207167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.207342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.207367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.207535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.207560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.207675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.207700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.207881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.207907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 
00:35:52.898 [2024-12-10 00:17:37.208083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.208108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.208208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.208233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.208324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.208349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.208573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.208598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.208827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.208853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.209049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.209074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.209322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.209346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.209504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.209530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.209629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.209654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.898 [2024-12-10 00:17:37.209814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.209845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 
00:35:52.898 [2024-12-10 00:17:37.209943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.898 [2024-12-10 00:17:37.209968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.898 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.210146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.210171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.210357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.210383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.210491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.210516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.210674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.210699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.210870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.210897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.211052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.211078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.211181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.211205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.211372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.211397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.211653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.211679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 
00:35:52.899 [2024-12-10 00:17:37.211799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.211830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.212019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.212045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.212145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.212169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.212323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.212349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.212456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.212481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.212667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.212695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.212813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.212860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.213085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.213109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.213264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.213289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.213392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.213417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 
00:35:52.899 [2024-12-10 00:17:37.213534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.213559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.213719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.213744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.213841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.213866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.213964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.213989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.214223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.214248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.214404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.214429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.214668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.214693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.214855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.214881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.215037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.215062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.215229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.215254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 
00:35:52.899 [2024-12-10 00:17:37.215417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.215441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.215615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.215641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.215830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.215855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.215967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.215993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.216185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.216210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.216368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.216392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.216574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.216598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.216725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.216750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.216857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.216883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 00:35:52.899 [2024-12-10 00:17:37.217079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.899 [2024-12-10 00:17:37.217105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.899 qpair failed and we were unable to recover it. 
00:35:52.899 [2024-12-10 00:17:37.217209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.217234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.217472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.217498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.217660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.217685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.217843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.217869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.218041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.218066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.218171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.218196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.218374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.218400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.218513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.218538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.218701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.218726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.218813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.218844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 
00:35:52.900 [2024-12-10 00:17:37.219032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.219057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.219239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.219263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.219370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.219395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.219503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.219529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.219623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.219648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.219805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.219839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.219996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.220021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.220127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.220152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.220252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.220278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.220444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.220470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 
00:35:52.900 [2024-12-10 00:17:37.220634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.220659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.220763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.220788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.220900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.220925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.221033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.221057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.221167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.221193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.221402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.221428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.221519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.221544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.221715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.221740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.221838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.221863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.222040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.222066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 
00:35:52.900 [2024-12-10 00:17:37.222172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.222197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.222400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.222425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.222675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.222700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.222898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.222925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.223029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.223054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.223277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.223302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.223475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.223499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.223613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.223638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.223740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.900 [2024-12-10 00:17:37.223765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.900 qpair failed and we were unable to recover it. 00:35:52.900 [2024-12-10 00:17:37.223930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.223956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 
00:35:52.901 [2024-12-10 00:17:37.224112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.224137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.224309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.224334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.224585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.224609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.224862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.224887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.225068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.225093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.225204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.225229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.225416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.225440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.225600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.225625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.225794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.225819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.225997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.226022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 
00:35:52.901 [2024-12-10 00:17:37.226112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.226137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.226311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.226336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.226538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.226563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.226660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.226686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.226847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.226873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.227099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.227129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.227321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.227346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.227461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.227487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.227654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.227679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.227844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.227870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 
00:35:52.901 [2024-12-10 00:17:37.228045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.228071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.228239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.228264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.228374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.228400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.228571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.228596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.228715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.228740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.228923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.228949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.229139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.229165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.229328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.229354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.229514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.229540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.229769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.229795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 
00:35:52.901 [2024-12-10 00:17:37.229900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.229927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.230110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.230136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.230251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.230277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.230440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.901 [2024-12-10 00:17:37.230466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.901 qpair failed and we were unable to recover it. 00:35:52.901 [2024-12-10 00:17:37.230632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.230657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.230909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.230936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.231031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.231057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.231165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.231190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.231356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.231383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.231541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.231566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 
00:35:52.902 [2024-12-10 00:17:37.231813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.231844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.232020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.232047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.232156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.232182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.232409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.232436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.232559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.232585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.232708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.232734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.232895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.232922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.233043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.233068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.233290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.233316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.233507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.233533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 
00:35:52.902 [2024-12-10 00:17:37.233694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.233721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.233840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.233866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.233976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.234001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.234180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.234205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.234309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.234334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.234510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.234539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.234652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.234678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.234856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.234883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.235000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.235026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.235194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.235220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 
00:35:52.902 [2024-12-10 00:17:37.235392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.235418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.235583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.235608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.235854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.235881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.236083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.236109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 Malloc0 00:35:52.902 [2024-12-10 00:17:37.236342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.236367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.236535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.236560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.236748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.236773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.236941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.236967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.237063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.902 [2024-12-10 00:17:37.237088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 
00:35:52.902 [2024-12-10 00:17:37.237279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.237304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 [2024-12-10 00:17:37.237392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.237416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.902 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:52.902 [2024-12-10 00:17:37.237610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.902 [2024-12-10 00:17:37.237636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.902 qpair failed and we were unable to recover it. 00:35:52.903 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.903 [2024-12-10 00:17:37.237902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.237928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:52.903 [2024-12-10 00:17:37.238113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.238139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.238242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.238267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.238435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.238460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.238568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.238593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 
00:35:52.903 [2024-12-10 00:17:37.238817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.238851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.239044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.239068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.239242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.239267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.239446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.239479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.239572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.239597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.239775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.239800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.239939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.239964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.240017] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:52.903 [2024-12-10 00:17:37.240128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.240152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.240247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.240272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 
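The xtrace fragments woven into the error stream around here come from host/target_disconnect.sh: the traced call "rpc_cmd nvmf_create_transport -t tcp -o" creates the TCP transport on the target side, and the "tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***" notice confirms it came up. Outside the test harness, the equivalent call against a running nvmf_tgt would go through SPDK's scripts/rpc.py; a minimal sketch, mirroring the flags traced above rather than documenting them:

  # create the NVMe-oF TCP transport; '-t tcp -o' is copied verbatim from the trace above
  scripts/rpc.py nvmf_create_transport -t tcp -o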
00:35:52.903 [2024-12-10 00:17:37.240442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.240467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.240647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.240672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.240860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.240886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.241088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.241115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.241208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.241234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.241354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.241379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.241602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.241626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.241738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.241767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.241949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.241975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.242138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.242163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 
00:35:52.903 [2024-12-10 00:17:37.242268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.242294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.242466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.242491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.242663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.242689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.242802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.242832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.243059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.243084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.243191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.243216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.243376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.243401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.243569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.243594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.243767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.243793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.243987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.244012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 
00:35:52.903 [2024-12-10 00:17:37.244123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.244148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.244343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.244368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.244544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.244569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.244682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.903 [2024-12-10 00:17:37.244706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.903 qpair failed and we were unable to recover it. 00:35:52.903 [2024-12-10 00:17:37.244841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.244866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.244967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.244991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.245224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.245249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.245429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.245454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.245613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.245638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.245761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.245786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 
00:35:52.904 [2024-12-10 00:17:37.245956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.245982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.246084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.246108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.246299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.246325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.246499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.246524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.246692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.246717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.246836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.246862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.247020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.247046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.247238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.247262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.247433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.247458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.247557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.247582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 
00:35:52.904 [2024-12-10 00:17:37.247745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.247770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.247947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.247974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.248199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.248225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.248403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.248428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.904 [2024-12-10 00:17:37.248524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.248550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.248641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.248666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.248860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.248886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:52.904 [2024-12-10 00:17:37.249089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.249114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it.
00:35:52.904 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.904 [2024-12-10 00:17:37.249203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.249228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.249325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.249347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:52.904 [2024-12-10 00:17:37.249523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.249547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.249666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.249690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.249923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.249990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.250205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.250249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.250390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.250431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa978000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.250568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.250595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.250772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.250796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 
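The rpc_cmd trace interleaved with the connect errors above is the test driver creating the target-side subsystem. A minimal standalone sketch of that step, assuming (as in SPDK's autotest helpers) that rpc_cmd ultimately invokes scripts/rpc.py against the running target:

# Sketch of the subsystem-creation step seen in the trace above.
# -a allows any host NQN to connect; -s sets the serial number the target reports.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001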
00:35:52.904 [2024-12-10 00:17:37.250918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.250942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.251054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.251080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.251326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.251351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.251585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.904 [2024-12-10 00:17:37.251610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.904 qpair failed and we were unable to recover it. 00:35:52.904 [2024-12-10 00:17:37.251782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.251807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.251901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.251927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.252044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.252069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.252169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.252194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.252417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.252442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.252540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.252565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 
00:35:52.905 [2024-12-10 00:17:37.252726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.252750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.252916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.252942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.253103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.253128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.253376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.253400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.253512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.253537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.253719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.253745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.253850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.253876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.253982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.254008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.254124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.254149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.254255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.254279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 
00:35:52.905 [2024-12-10 00:17:37.254509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.254534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.254641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.254666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.254844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.254870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.255041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.255066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.255169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.255193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.255387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.255412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.255526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.255551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.255725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.255750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.255851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.255880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.256044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.256069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 
00:35:52.905 [2024-12-10 00:17:37.256196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.256221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.256402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.256427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.905 [2024-12-10 00:17:37.256649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.256674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.256771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.256794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.256914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.256940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:52.905 [2024-12-10 00:17:37.257029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.257053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.257208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.257233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.905 [2024-12-10 00:17:37.257338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.257363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 
00:35:52.905 [2024-12-10 00:17:37.257532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:52.905 [2024-12-10 00:17:37.257557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.257666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.257690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.257844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.257869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.905 qpair failed and we were unable to recover it. 00:35:52.905 [2024-12-10 00:17:37.258037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.905 [2024-12-10 00:17:37.258062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.258288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.258312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.258429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.258454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.258556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.258581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.258681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.258705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.258806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.258836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 
00:35:52.906 [2024-12-10 00:17:37.258962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.258987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.259160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.259185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.259341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.259365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.259473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.259498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.259612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.259637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.259726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.259750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.259862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.259892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.259990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.260015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.260107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.260132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.260329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.260354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 
00:35:52.906 [2024-12-10 00:17:37.260527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.260552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.260658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.260682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.260788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.260814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.260996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.261022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.261111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.261136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.261317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.261341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.261433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.261458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.261570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.261595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.261773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.261798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.262028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.262053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 
00:35:52.906 [2024-12-10 00:17:37.262170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.262195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.262312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.262337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.262497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.262521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.262720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.262745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.262859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.262884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.262993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.263018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.263245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.263269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.263364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.263389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.263547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.906 [2024-12-10 00:17:37.263572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.906 qpair failed and we were unable to recover it. 00:35:52.906 [2024-12-10 00:17:37.263696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.263721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 
00:35:52.907 [2024-12-10 00:17:37.263811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.263842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.264023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.264048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.264211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.264236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.264333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.264357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.264450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.264473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.907 [2024-12-10 00:17:37.264652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.264678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.264861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.264886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:52.907 [2024-12-10 00:17:37.264978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.265003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.265174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.265199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 
00:35:52.907 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.907 [2024-12-10 00:17:37.265304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.265329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.265486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:52.907 [2024-12-10 00:17:37.265515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.265753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.265778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.265998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.266024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.266119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.266144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.266247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.266276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.266381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.266406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.266671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.266696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.266802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.266833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 
00:35:52.907 [2024-12-10 00:17:37.266940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.266965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.267134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.267158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.267337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.267363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.267461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.267485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.267590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.267613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.267770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.267793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 [2024-12-10 00:17:37.267920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.907 [2024-12-10 00:17:37.267944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa980000b90 with addr=10.0.0.2, port=4420 00:35:52.907 qpair failed and we were unable to recover it. 
00:35:52.907 [2024-12-10 00:17:37.268198] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:52.907 [2024-12-10 00:17:37.270757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.907 [2024-12-10 00:17:37.270872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.907 [2024-12-10 00:17:37.270906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.907 [2024-12-10 00:17:37.270926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.907 [2024-12-10 00:17:37.270944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:52.907 [2024-12-10 00:17:37.270997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.907 qpair failed and we were unable to recover it. 00:35:52.907 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.907 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:52.907 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.907 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:52.907 [2024-12-10 00:17:37.280660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.907 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.907 [2024-12-10 00:17:37.280749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.907 [2024-12-10 00:17:37.280779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.907 [2024-12-10 00:17:37.280797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.907 [2024-12-10 00:17:37.280813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:52.907 [2024-12-10 00:17:37.280877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.907 qpair failed and we were unable to recover it. 
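The remaining rpc_cmd lines attach the namespace and the TCP listeners; once the listener NOTICE appears, the host's Fabric CONNECT is rejected with sct 1, sc 130, matching the target-side "Unknown controller ID 0x1" errors that follow. A sketch of that setup sequence, again assuming rpc_cmd wraps scripts/rpc.py and reusing the bdev name and address from the trace:

# Attach the Malloc0 bdev as a namespace and expose the subsystem (plus discovery) over TCP.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420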
00:35:52.907 00:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 595600 00:35:52.907 [2024-12-10 00:17:37.290656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.907 [2024-12-10 00:17:37.290725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.907 [2024-12-10 00:17:37.290745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.907 [2024-12-10 00:17:37.290758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.907 [2024-12-10 00:17:37.290769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:52.907 [2024-12-10 00:17:37.290791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.908 qpair failed and we were unable to recover it. 00:35:52.908 [2024-12-10 00:17:37.300608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.908 [2024-12-10 00:17:37.300672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.908 [2024-12-10 00:17:37.300688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.908 [2024-12-10 00:17:37.300698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.908 [2024-12-10 00:17:37.300707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:52.908 [2024-12-10 00:17:37.300725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.908 qpair failed and we were unable to recover it. 00:35:52.908 [2024-12-10 00:17:37.310532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.908 [2024-12-10 00:17:37.310594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.908 [2024-12-10 00:17:37.310611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.908 [2024-12-10 00:17:37.310621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.908 [2024-12-10 00:17:37.310629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:52.908 [2024-12-10 00:17:37.310647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.908 qpair failed and we were unable to recover it. 
00:35:52.908 [2024-12-10 00:17:37.320562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.908 [2024-12-10 00:17:37.320618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.908 [2024-12-10 00:17:37.320634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.908 [2024-12-10 00:17:37.320644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.908 [2024-12-10 00:17:37.320653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:52.908 [2024-12-10 00:17:37.320671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.908 qpair failed and we were unable to recover it. 00:35:53.170 [2024-12-10 00:17:37.330646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.170 [2024-12-10 00:17:37.330698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.170 [2024-12-10 00:17:37.330714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.170 [2024-12-10 00:17:37.330724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.170 [2024-12-10 00:17:37.330734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.170 [2024-12-10 00:17:37.330752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.170 qpair failed and we were unable to recover it. 00:35:53.170 [2024-12-10 00:17:37.340692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.170 [2024-12-10 00:17:37.340748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.170 [2024-12-10 00:17:37.340765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.170 [2024-12-10 00:17:37.340774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.170 [2024-12-10 00:17:37.340783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.170 [2024-12-10 00:17:37.340801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.170 qpair failed and we were unable to recover it. 
00:35:53.170 [2024-12-10 00:17:37.350721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.170 [2024-12-10 00:17:37.350775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.170 [2024-12-10 00:17:37.350790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.170 [2024-12-10 00:17:37.350803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.170 [2024-12-10 00:17:37.350812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.170 [2024-12-10 00:17:37.350835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.170 qpair failed and we were unable to recover it. 00:35:53.170 [2024-12-10 00:17:37.360749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.170 [2024-12-10 00:17:37.360805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.170 [2024-12-10 00:17:37.360821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.170 [2024-12-10 00:17:37.360837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.170 [2024-12-10 00:17:37.360846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.170 [2024-12-10 00:17:37.360864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.170 qpair failed and we were unable to recover it. 00:35:53.170 [2024-12-10 00:17:37.370828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.170 [2024-12-10 00:17:37.370922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.170 [2024-12-10 00:17:37.370938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.170 [2024-12-10 00:17:37.370948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.170 [2024-12-10 00:17:37.370957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.170 [2024-12-10 00:17:37.370975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.170 qpair failed and we were unable to recover it. 
00:35:53.170 [2024-12-10 00:17:37.380796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.170 [2024-12-10 00:17:37.380871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.170 [2024-12-10 00:17:37.380887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.170 [2024-12-10 00:17:37.380896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.170 [2024-12-10 00:17:37.380905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.170 [2024-12-10 00:17:37.380923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.170 qpair failed and we were unable to recover it. 00:35:53.170 [2024-12-10 00:17:37.390853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.170 [2024-12-10 00:17:37.390916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.170 [2024-12-10 00:17:37.390932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.170 [2024-12-10 00:17:37.390942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.170 [2024-12-10 00:17:37.390951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.170 [2024-12-10 00:17:37.390972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.170 qpair failed and we were unable to recover it. 00:35:53.170 [2024-12-10 00:17:37.400854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.170 [2024-12-10 00:17:37.400913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.170 [2024-12-10 00:17:37.400929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.170 [2024-12-10 00:17:37.400939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.170 [2024-12-10 00:17:37.400948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.170 [2024-12-10 00:17:37.400966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.170 qpair failed and we were unable to recover it. 
00:35:53.170 [2024-12-10 00:17:37.410891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.170 [2024-12-10 00:17:37.410952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.170 [2024-12-10 00:17:37.410968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.170 [2024-12-10 00:17:37.410978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.170 [2024-12-10 00:17:37.410987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.170 [2024-12-10 00:17:37.411005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.170 qpair failed and we were unable to recover it. 00:35:53.170 [2024-12-10 00:17:37.420911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.170 [2024-12-10 00:17:37.420983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.170 [2024-12-10 00:17:37.420998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.170 [2024-12-10 00:17:37.421008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.170 [2024-12-10 00:17:37.421016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.170 [2024-12-10 00:17:37.421034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.170 qpair failed and we were unable to recover it. 00:35:53.170 [2024-12-10 00:17:37.430868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.170 [2024-12-10 00:17:37.430933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.170 [2024-12-10 00:17:37.430949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.170 [2024-12-10 00:17:37.430958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.170 [2024-12-10 00:17:37.430967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.170 [2024-12-10 00:17:37.430985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.170 qpair failed and we were unable to recover it. 
00:35:53.170 [2024-12-10 00:17:37.440925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.170 [2024-12-10 00:17:37.440989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.170 [2024-12-10 00:17:37.441005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.171 [2024-12-10 00:17:37.441015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.171 [2024-12-10 00:17:37.441023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.171 [2024-12-10 00:17:37.441041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.171 qpair failed and we were unable to recover it. 00:35:53.171 [2024-12-10 00:17:37.451094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.171 [2024-12-10 00:17:37.451168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.171 [2024-12-10 00:17:37.451185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.171 [2024-12-10 00:17:37.451194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.171 [2024-12-10 00:17:37.451202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.171 [2024-12-10 00:17:37.451220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.171 qpair failed and we were unable to recover it. 00:35:53.171 [2024-12-10 00:17:37.461089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.171 [2024-12-10 00:17:37.461152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.171 [2024-12-10 00:17:37.461168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.171 [2024-12-10 00:17:37.461177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.171 [2024-12-10 00:17:37.461185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.171 [2024-12-10 00:17:37.461204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.171 qpair failed and we were unable to recover it. 
00:35:53.171 [2024-12-10 00:17:37.471060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.171 [2024-12-10 00:17:37.471112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.171 [2024-12-10 00:17:37.471128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.171 [2024-12-10 00:17:37.471137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.171 [2024-12-10 00:17:37.471146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.171 [2024-12-10 00:17:37.471164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.171 qpair failed and we were unable to recover it. 00:35:53.171 [2024-12-10 00:17:37.481106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.171 [2024-12-10 00:17:37.481173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.171 [2024-12-10 00:17:37.481189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.171 [2024-12-10 00:17:37.481202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.171 [2024-12-10 00:17:37.481210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.171 [2024-12-10 00:17:37.481228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.171 qpair failed and we were unable to recover it. 00:35:53.171 [2024-12-10 00:17:37.491051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.171 [2024-12-10 00:17:37.491108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.171 [2024-12-10 00:17:37.491124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.171 [2024-12-10 00:17:37.491134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.171 [2024-12-10 00:17:37.491143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.171 [2024-12-10 00:17:37.491160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.171 qpair failed and we were unable to recover it. 
00:35:53.171 [2024-12-10 00:17:37.501143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.171 [2024-12-10 00:17:37.501202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.171 [2024-12-10 00:17:37.501218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.171 [2024-12-10 00:17:37.501228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.171 [2024-12-10 00:17:37.501237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.171 [2024-12-10 00:17:37.501255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.171 qpair failed and we were unable to recover it. 00:35:53.171 [2024-12-10 00:17:37.511171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.171 [2024-12-10 00:17:37.511234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.171 [2024-12-10 00:17:37.511249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.171 [2024-12-10 00:17:37.511259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.171 [2024-12-10 00:17:37.511268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.171 [2024-12-10 00:17:37.511285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.171 qpair failed and we were unable to recover it. 00:35:53.171 [2024-12-10 00:17:37.521129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.171 [2024-12-10 00:17:37.521186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.171 [2024-12-10 00:17:37.521202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.171 [2024-12-10 00:17:37.521212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.171 [2024-12-10 00:17:37.521220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.171 [2024-12-10 00:17:37.521242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.171 qpair failed and we were unable to recover it. 
00:35:53.171 [2024-12-10 00:17:37.531197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.171 [2024-12-10 00:17:37.531253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.171 [2024-12-10 00:17:37.531269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.171 [2024-12-10 00:17:37.531278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.171 [2024-12-10 00:17:37.531287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.171 [2024-12-10 00:17:37.531305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.171 qpair failed and we were unable to recover it. 00:35:53.171 [2024-12-10 00:17:37.541279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.171 [2024-12-10 00:17:37.541340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.171 [2024-12-10 00:17:37.541356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.171 [2024-12-10 00:17:37.541366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.171 [2024-12-10 00:17:37.541374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.171 [2024-12-10 00:17:37.541393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.171 qpair failed and we were unable to recover it. 00:35:53.171 [2024-12-10 00:17:37.551319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.171 [2024-12-10 00:17:37.551376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.171 [2024-12-10 00:17:37.551392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.171 [2024-12-10 00:17:37.551402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.171 [2024-12-10 00:17:37.551410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.171 [2024-12-10 00:17:37.551428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.171 qpair failed and we were unable to recover it. 
00:35:53.171 [2024-12-10 00:17:37.561347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.171 [2024-12-10 00:17:37.561405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.171 [2024-12-10 00:17:37.561421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.171 [2024-12-10 00:17:37.561432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.171 [2024-12-10 00:17:37.561440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.171 [2024-12-10 00:17:37.561458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.171 qpair failed and we were unable to recover it. 00:35:53.171 [2024-12-10 00:17:37.571271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.171 [2024-12-10 00:17:37.571328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.171 [2024-12-10 00:17:37.571344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.171 [2024-12-10 00:17:37.571355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.171 [2024-12-10 00:17:37.571363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.171 [2024-12-10 00:17:37.571381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.172 qpair failed and we were unable to recover it. 00:35:53.172 [2024-12-10 00:17:37.581369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.172 [2024-12-10 00:17:37.581427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.172 [2024-12-10 00:17:37.581443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.172 [2024-12-10 00:17:37.581453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.172 [2024-12-10 00:17:37.581461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.172 [2024-12-10 00:17:37.581478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.172 qpair failed and we were unable to recover it. 
00:35:53.172 [2024-12-10 00:17:37.591391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.172 [2024-12-10 00:17:37.591448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.172 [2024-12-10 00:17:37.591464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.172 [2024-12-10 00:17:37.591474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.172 [2024-12-10 00:17:37.591483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.172 [2024-12-10 00:17:37.591501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.172 qpair failed and we were unable to recover it. 00:35:53.172 [2024-12-10 00:17:37.601417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.172 [2024-12-10 00:17:37.601475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.172 [2024-12-10 00:17:37.601491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.172 [2024-12-10 00:17:37.601500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.172 [2024-12-10 00:17:37.601509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.172 [2024-12-10 00:17:37.601526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.172 qpair failed and we were unable to recover it. 00:35:53.172 [2024-12-10 00:17:37.611463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.172 [2024-12-10 00:17:37.611516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.172 [2024-12-10 00:17:37.611535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.172 [2024-12-10 00:17:37.611545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.172 [2024-12-10 00:17:37.611553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.172 [2024-12-10 00:17:37.611572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.172 qpair failed and we were unable to recover it. 
00:35:53.172 [2024-12-10 00:17:37.621514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.172 [2024-12-10 00:17:37.621596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.172 [2024-12-10 00:17:37.621613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.172 [2024-12-10 00:17:37.621622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.172 [2024-12-10 00:17:37.621630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.172 [2024-12-10 00:17:37.621648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.172 qpair failed and we were unable to recover it. 00:35:53.172 [2024-12-10 00:17:37.631456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.172 [2024-12-10 00:17:37.631512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.172 [2024-12-10 00:17:37.631528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.172 [2024-12-10 00:17:37.631538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.172 [2024-12-10 00:17:37.631546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.172 [2024-12-10 00:17:37.631564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.172 qpair failed and we were unable to recover it. 00:35:53.172 [2024-12-10 00:17:37.641494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.172 [2024-12-10 00:17:37.641550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.172 [2024-12-10 00:17:37.641566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.172 [2024-12-10 00:17:37.641575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.172 [2024-12-10 00:17:37.641585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.435 [2024-12-10 00:17:37.641603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.435 qpair failed and we were unable to recover it. 
00:35:53.435 [2024-12-10 00:17:37.651573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.435 [2024-12-10 00:17:37.651629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.435 [2024-12-10 00:17:37.651646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.435 [2024-12-10 00:17:37.651655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.435 [2024-12-10 00:17:37.651667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.435 [2024-12-10 00:17:37.651684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.435 qpair failed and we were unable to recover it. 00:35:53.435 [2024-12-10 00:17:37.661626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.435 [2024-12-10 00:17:37.661707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.435 [2024-12-10 00:17:37.661723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.435 [2024-12-10 00:17:37.661732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.435 [2024-12-10 00:17:37.661741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.435 [2024-12-10 00:17:37.661759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.435 qpair failed and we were unable to recover it. 00:35:53.435 [2024-12-10 00:17:37.671566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.435 [2024-12-10 00:17:37.671625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.435 [2024-12-10 00:17:37.671640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.435 [2024-12-10 00:17:37.671650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.435 [2024-12-10 00:17:37.671658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.435 [2024-12-10 00:17:37.671676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.435 qpair failed and we were unable to recover it. 
00:35:53.435 [2024-12-10 00:17:37.681641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.435 [2024-12-10 00:17:37.681700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.435 [2024-12-10 00:17:37.681717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.435 [2024-12-10 00:17:37.681726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.435 [2024-12-10 00:17:37.681735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.436 [2024-12-10 00:17:37.681753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.436 qpair failed and we were unable to recover it. 00:35:53.436 [2024-12-10 00:17:37.691618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.436 [2024-12-10 00:17:37.691671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.436 [2024-12-10 00:17:37.691687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.436 [2024-12-10 00:17:37.691697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.436 [2024-12-10 00:17:37.691706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.436 [2024-12-10 00:17:37.691724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.436 qpair failed and we were unable to recover it. 00:35:53.436 [2024-12-10 00:17:37.701729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.436 [2024-12-10 00:17:37.701827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.436 [2024-12-10 00:17:37.701844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.436 [2024-12-10 00:17:37.701854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.436 [2024-12-10 00:17:37.701862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.436 [2024-12-10 00:17:37.701880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.436 qpair failed and we were unable to recover it. 
00:35:53.436 [2024-12-10 00:17:37.711665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.436 [2024-12-10 00:17:37.711726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.436 [2024-12-10 00:17:37.711742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.436 [2024-12-10 00:17:37.711752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.436 [2024-12-10 00:17:37.711761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.436 [2024-12-10 00:17:37.711778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.436 qpair failed and we were unable to recover it. 00:35:53.436 [2024-12-10 00:17:37.721774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.436 [2024-12-10 00:17:37.721850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.436 [2024-12-10 00:17:37.721866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.436 [2024-12-10 00:17:37.721876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.436 [2024-12-10 00:17:37.721884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.436 [2024-12-10 00:17:37.721903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.436 qpair failed and we were unable to recover it. 00:35:53.436 [2024-12-10 00:17:37.731803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.436 [2024-12-10 00:17:37.731859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.436 [2024-12-10 00:17:37.731876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.436 [2024-12-10 00:17:37.731886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.436 [2024-12-10 00:17:37.731896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.436 [2024-12-10 00:17:37.731914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.436 qpair failed and we were unable to recover it. 
00:35:53.436 [2024-12-10 00:17:37.741851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.436 [2024-12-10 00:17:37.741915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.436 [2024-12-10 00:17:37.741934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.436 [2024-12-10 00:17:37.741944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.436 [2024-12-10 00:17:37.741952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.436 [2024-12-10 00:17:37.741971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.436 qpair failed and we were unable to recover it. 00:35:53.436 [2024-12-10 00:17:37.751912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.436 [2024-12-10 00:17:37.751984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.436 [2024-12-10 00:17:37.752000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.436 [2024-12-10 00:17:37.752010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.436 [2024-12-10 00:17:37.752018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.436 [2024-12-10 00:17:37.752036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.436 qpair failed and we were unable to recover it. 00:35:53.436 [2024-12-10 00:17:37.761909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.436 [2024-12-10 00:17:37.761966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.436 [2024-12-10 00:17:37.761982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.436 [2024-12-10 00:17:37.761991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.436 [2024-12-10 00:17:37.762000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.436 [2024-12-10 00:17:37.762018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.436 qpair failed and we were unable to recover it. 
00:35:53.436 [2024-12-10 00:17:37.771912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.436 [2024-12-10 00:17:37.771964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.436 [2024-12-10 00:17:37.771980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.436 [2024-12-10 00:17:37.771990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.436 [2024-12-10 00:17:37.772000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.436 [2024-12-10 00:17:37.772017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.436 qpair failed and we were unable to recover it. 00:35:53.436 [2024-12-10 00:17:37.782007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.436 [2024-12-10 00:17:37.782065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.436 [2024-12-10 00:17:37.782081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.436 [2024-12-10 00:17:37.782091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.436 [2024-12-10 00:17:37.782104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.436 [2024-12-10 00:17:37.782121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.436 qpair failed and we were unable to recover it. 00:35:53.436 [2024-12-10 00:17:37.792011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.436 [2024-12-10 00:17:37.792067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.436 [2024-12-10 00:17:37.792083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.436 [2024-12-10 00:17:37.792092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.436 [2024-12-10 00:17:37.792101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.436 [2024-12-10 00:17:37.792119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.436 qpair failed and we were unable to recover it. 
00:35:53.436 [2024-12-10 00:17:37.802015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.436 [2024-12-10 00:17:37.802070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.436 [2024-12-10 00:17:37.802086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.436 [2024-12-10 00:17:37.802096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.436 [2024-12-10 00:17:37.802104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.436 [2024-12-10 00:17:37.802122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.436 qpair failed and we were unable to recover it. 00:35:53.436 [2024-12-10 00:17:37.812033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.436 [2024-12-10 00:17:37.812091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.436 [2024-12-10 00:17:37.812107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.436 [2024-12-10 00:17:37.812117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.436 [2024-12-10 00:17:37.812126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.436 [2024-12-10 00:17:37.812144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.436 qpair failed and we were unable to recover it. 00:35:53.436 [2024-12-10 00:17:37.822068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.436 [2024-12-10 00:17:37.822121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.437 [2024-12-10 00:17:37.822136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.437 [2024-12-10 00:17:37.822146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.437 [2024-12-10 00:17:37.822155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.437 [2024-12-10 00:17:37.822172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.437 qpair failed and we were unable to recover it. 
00:35:53.437 [2024-12-10 00:17:37.832105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.437 [2024-12-10 00:17:37.832165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.437 [2024-12-10 00:17:37.832181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.437 [2024-12-10 00:17:37.832191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.437 [2024-12-10 00:17:37.832199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.437 [2024-12-10 00:17:37.832217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.437 qpair failed and we were unable to recover it. 00:35:53.437 [2024-12-10 00:17:37.842127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.437 [2024-12-10 00:17:37.842184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.437 [2024-12-10 00:17:37.842200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.437 [2024-12-10 00:17:37.842210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.437 [2024-12-10 00:17:37.842219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.437 [2024-12-10 00:17:37.842237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.437 qpair failed and we were unable to recover it. 00:35:53.437 [2024-12-10 00:17:37.852157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.437 [2024-12-10 00:17:37.852213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.437 [2024-12-10 00:17:37.852229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.437 [2024-12-10 00:17:37.852239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.437 [2024-12-10 00:17:37.852247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.437 [2024-12-10 00:17:37.852265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.437 qpair failed and we were unable to recover it. 
00:35:53.437 [2024-12-10 00:17:37.862196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.437 [2024-12-10 00:17:37.862267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.437 [2024-12-10 00:17:37.862284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.437 [2024-12-10 00:17:37.862293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.437 [2024-12-10 00:17:37.862302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.437 [2024-12-10 00:17:37.862320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.437 qpair failed and we were unable to recover it. 00:35:53.437 [2024-12-10 00:17:37.872205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.437 [2024-12-10 00:17:37.872267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.437 [2024-12-10 00:17:37.872283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.437 [2024-12-10 00:17:37.872293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.437 [2024-12-10 00:17:37.872302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.437 [2024-12-10 00:17:37.872320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.437 qpair failed and we were unable to recover it. 00:35:53.437 [2024-12-10 00:17:37.882284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.437 [2024-12-10 00:17:37.882392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.437 [2024-12-10 00:17:37.882408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.437 [2024-12-10 00:17:37.882417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.437 [2024-12-10 00:17:37.882426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.437 [2024-12-10 00:17:37.882444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.437 qpair failed and we were unable to recover it. 
00:35:53.437 [2024-12-10 00:17:37.892254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.437 [2024-12-10 00:17:37.892311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.437 [2024-12-10 00:17:37.892326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.437 [2024-12-10 00:17:37.892336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.437 [2024-12-10 00:17:37.892344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.437 [2024-12-10 00:17:37.892362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.437 qpair failed and we were unable to recover it. 00:35:53.437 [2024-12-10 00:17:37.902302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.437 [2024-12-10 00:17:37.902414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.437 [2024-12-10 00:17:37.902430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.437 [2024-12-10 00:17:37.902439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.437 [2024-12-10 00:17:37.902448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.437 [2024-12-10 00:17:37.902466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.437 qpair failed and we were unable to recover it. 00:35:53.699 [2024-12-10 00:17:37.912306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.699 [2024-12-10 00:17:37.912392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.699 [2024-12-10 00:17:37.912408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.699 [2024-12-10 00:17:37.912422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.699 [2024-12-10 00:17:37.912430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.699 [2024-12-10 00:17:37.912448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.699 qpair failed and we were unable to recover it. 
00:35:53.699 [2024-12-10 00:17:37.922351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.699 [2024-12-10 00:17:37.922404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.699 [2024-12-10 00:17:37.922421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.699 [2024-12-10 00:17:37.922431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.699 [2024-12-10 00:17:37.922440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.699 [2024-12-10 00:17:37.922458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-12-10 00:17:37.932380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.699 [2024-12-10 00:17:37.932437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.699 [2024-12-10 00:17:37.932453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.699 [2024-12-10 00:17:37.932463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.699 [2024-12-10 00:17:37.932472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.699 [2024-12-10 00:17:37.932489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-12-10 00:17:37.942413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.699 [2024-12-10 00:17:37.942473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.699 [2024-12-10 00:17:37.942488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.699 [2024-12-10 00:17:37.942498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.699 [2024-12-10 00:17:37.942507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.699 [2024-12-10 00:17:37.942524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.699 qpair failed and we were unable to recover it. 
00:35:53.699 [2024-12-10 00:17:37.952454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.699 [2024-12-10 00:17:37.952522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.699 [2024-12-10 00:17:37.952538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.699 [2024-12-10 00:17:37.952548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.699 [2024-12-10 00:17:37.952556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.699 [2024-12-10 00:17:37.952577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-12-10 00:17:37.962461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.699 [2024-12-10 00:17:37.962559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.699 [2024-12-10 00:17:37.962575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.699 [2024-12-10 00:17:37.962584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.699 [2024-12-10 00:17:37.962592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.699 [2024-12-10 00:17:37.962610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-12-10 00:17:37.972491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.699 [2024-12-10 00:17:37.972545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.699 [2024-12-10 00:17:37.972561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.699 [2024-12-10 00:17:37.972571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.699 [2024-12-10 00:17:37.972579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.700 [2024-12-10 00:17:37.972597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.700 qpair failed and we were unable to recover it. 
00:35:53.700 [2024-12-10 00:17:37.982530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.700 [2024-12-10 00:17:37.982588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.700 [2024-12-10 00:17:37.982604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.700 [2024-12-10 00:17:37.982613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.700 [2024-12-10 00:17:37.982622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.700 [2024-12-10 00:17:37.982640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-12-10 00:17:37.992579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.700 [2024-12-10 00:17:37.992649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.700 [2024-12-10 00:17:37.992665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.700 [2024-12-10 00:17:37.992675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.700 [2024-12-10 00:17:37.992683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.700 [2024-12-10 00:17:37.992702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-12-10 00:17:38.002583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.700 [2024-12-10 00:17:38.002651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.700 [2024-12-10 00:17:38.002667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.700 [2024-12-10 00:17:38.002677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.700 [2024-12-10 00:17:38.002686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.700 [2024-12-10 00:17:38.002704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.700 qpair failed and we were unable to recover it. 
00:35:53.700 [2024-12-10 00:17:38.012663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.700 [2024-12-10 00:17:38.012722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.700 [2024-12-10 00:17:38.012738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.700 [2024-12-10 00:17:38.012748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.700 [2024-12-10 00:17:38.012757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.700 [2024-12-10 00:17:38.012775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-12-10 00:17:38.022651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.700 [2024-12-10 00:17:38.022707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.700 [2024-12-10 00:17:38.022723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.700 [2024-12-10 00:17:38.022734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.700 [2024-12-10 00:17:38.022742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.700 [2024-12-10 00:17:38.022760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-12-10 00:17:38.032679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.700 [2024-12-10 00:17:38.032735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.700 [2024-12-10 00:17:38.032751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.700 [2024-12-10 00:17:38.032761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.700 [2024-12-10 00:17:38.032770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.700 [2024-12-10 00:17:38.032788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.700 qpair failed and we were unable to recover it. 
00:35:53.700 [2024-12-10 00:17:38.042727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.700 [2024-12-10 00:17:38.042785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.700 [2024-12-10 00:17:38.042804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.700 [2024-12-10 00:17:38.042814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.700 [2024-12-10 00:17:38.042828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.700 [2024-12-10 00:17:38.042847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-12-10 00:17:38.052760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.700 [2024-12-10 00:17:38.052834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.700 [2024-12-10 00:17:38.052851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.700 [2024-12-10 00:17:38.052860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.700 [2024-12-10 00:17:38.052869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.700 [2024-12-10 00:17:38.052887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-12-10 00:17:38.062775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.700 [2024-12-10 00:17:38.062836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.700 [2024-12-10 00:17:38.062852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.700 [2024-12-10 00:17:38.062863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.700 [2024-12-10 00:17:38.062872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.700 [2024-12-10 00:17:38.062890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.700 qpair failed and we were unable to recover it. 
00:35:53.700 [2024-12-10 00:17:38.072791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.700 [2024-12-10 00:17:38.072852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.700 [2024-12-10 00:17:38.072868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.700 [2024-12-10 00:17:38.072877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.700 [2024-12-10 00:17:38.072886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.700 [2024-12-10 00:17:38.072904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-12-10 00:17:38.082812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.700 [2024-12-10 00:17:38.082872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.700 [2024-12-10 00:17:38.082888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.700 [2024-12-10 00:17:38.082897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.700 [2024-12-10 00:17:38.082906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.700 [2024-12-10 00:17:38.082931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-12-10 00:17:38.092841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.700 [2024-12-10 00:17:38.092897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.700 [2024-12-10 00:17:38.092913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.700 [2024-12-10 00:17:38.092922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.700 [2024-12-10 00:17:38.092930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.700 [2024-12-10 00:17:38.092948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.700 qpair failed and we were unable to recover it. 
00:35:53.700 [2024-12-10 00:17:38.102891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.700 [2024-12-10 00:17:38.102951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.700 [2024-12-10 00:17:38.102967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.700 [2024-12-10 00:17:38.102977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.700 [2024-12-10 00:17:38.102986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.700 [2024-12-10 00:17:38.103004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-12-10 00:17:38.112905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.701 [2024-12-10 00:17:38.112964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.701 [2024-12-10 00:17:38.112980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.701 [2024-12-10 00:17:38.112990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.701 [2024-12-10 00:17:38.112999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.701 [2024-12-10 00:17:38.113017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-12-10 00:17:38.122959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.701 [2024-12-10 00:17:38.123064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.701 [2024-12-10 00:17:38.123080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.701 [2024-12-10 00:17:38.123089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.701 [2024-12-10 00:17:38.123098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.701 [2024-12-10 00:17:38.123116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.701 qpair failed and we were unable to recover it. 
00:35:53.701 [2024-12-10 00:17:38.132948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.701 [2024-12-10 00:17:38.133003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.701 [2024-12-10 00:17:38.133018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.701 [2024-12-10 00:17:38.133028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.701 [2024-12-10 00:17:38.133037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.701 [2024-12-10 00:17:38.133055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-12-10 00:17:38.142971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.701 [2024-12-10 00:17:38.143074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.701 [2024-12-10 00:17:38.143090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.701 [2024-12-10 00:17:38.143100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.701 [2024-12-10 00:17:38.143108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.701 [2024-12-10 00:17:38.143126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-12-10 00:17:38.153014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.701 [2024-12-10 00:17:38.153071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.701 [2024-12-10 00:17:38.153087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.701 [2024-12-10 00:17:38.153097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.701 [2024-12-10 00:17:38.153106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.701 [2024-12-10 00:17:38.153124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.701 qpair failed and we were unable to recover it. 
00:35:53.701 [2024-12-10 00:17:38.163036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.701 [2024-12-10 00:17:38.163094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.701 [2024-12-10 00:17:38.163109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.701 [2024-12-10 00:17:38.163119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.701 [2024-12-10 00:17:38.163127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.701 [2024-12-10 00:17:38.163144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-10 00:17:38.173057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.962 [2024-12-10 00:17:38.173112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.962 [2024-12-10 00:17:38.173131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.962 [2024-12-10 00:17:38.173141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.962 [2024-12-10 00:17:38.173149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.962 [2024-12-10 00:17:38.173167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.962 qpair failed and we were unable to recover it. 00:35:53.962 [2024-12-10 00:17:38.183094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.962 [2024-12-10 00:17:38.183150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.962 [2024-12-10 00:17:38.183166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.962 [2024-12-10 00:17:38.183176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.962 [2024-12-10 00:17:38.183184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.962 [2024-12-10 00:17:38.183202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.962 qpair failed and we were unable to recover it. 
00:35:53.962 [2024-12-10 00:17:38.193144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.962 [2024-12-10 00:17:38.193204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.962 [2024-12-10 00:17:38.193220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.963 [2024-12-10 00:17:38.193230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.963 [2024-12-10 00:17:38.193238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.963 [2024-12-10 00:17:38.193256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-10 00:17:38.203143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.963 [2024-12-10 00:17:38.203199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.963 [2024-12-10 00:17:38.203215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.963 [2024-12-10 00:17:38.203225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.963 [2024-12-10 00:17:38.203233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.963 [2024-12-10 00:17:38.203251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-10 00:17:38.213161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.963 [2024-12-10 00:17:38.213210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.963 [2024-12-10 00:17:38.213225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.963 [2024-12-10 00:17:38.213234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.963 [2024-12-10 00:17:38.213246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.963 [2024-12-10 00:17:38.213264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.963 qpair failed and we were unable to recover it. 
00:35:53.963 [2024-12-10 00:17:38.223197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.963 [2024-12-10 00:17:38.223254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.963 [2024-12-10 00:17:38.223270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.963 [2024-12-10 00:17:38.223280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.963 [2024-12-10 00:17:38.223288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.963 [2024-12-10 00:17:38.223306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-10 00:17:38.233257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.963 [2024-12-10 00:17:38.233322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.963 [2024-12-10 00:17:38.233338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.963 [2024-12-10 00:17:38.233348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.963 [2024-12-10 00:17:38.233356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.963 [2024-12-10 00:17:38.233374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-10 00:17:38.243255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.963 [2024-12-10 00:17:38.243313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.963 [2024-12-10 00:17:38.243329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.963 [2024-12-10 00:17:38.243338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.963 [2024-12-10 00:17:38.243347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.963 [2024-12-10 00:17:38.243365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.963 qpair failed and we were unable to recover it. 
00:35:53.963 [2024-12-10 00:17:38.253274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.963 [2024-12-10 00:17:38.253330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.963 [2024-12-10 00:17:38.253346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.963 [2024-12-10 00:17:38.253355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.963 [2024-12-10 00:17:38.253363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.963 [2024-12-10 00:17:38.253381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-10 00:17:38.263314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.963 [2024-12-10 00:17:38.263375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.963 [2024-12-10 00:17:38.263390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.963 [2024-12-10 00:17:38.263400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.963 [2024-12-10 00:17:38.263408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.963 [2024-12-10 00:17:38.263426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-10 00:17:38.273347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.963 [2024-12-10 00:17:38.273401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.963 [2024-12-10 00:17:38.273417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.963 [2024-12-10 00:17:38.273426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.963 [2024-12-10 00:17:38.273435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.963 [2024-12-10 00:17:38.273453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.963 qpair failed and we were unable to recover it. 
00:35:53.963 [2024-12-10 00:17:38.283365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.963 [2024-12-10 00:17:38.283426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.963 [2024-12-10 00:17:38.283442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.963 [2024-12-10 00:17:38.283452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.963 [2024-12-10 00:17:38.283460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.963 [2024-12-10 00:17:38.283478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-10 00:17:38.293334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.963 [2024-12-10 00:17:38.293389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.963 [2024-12-10 00:17:38.293405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.963 [2024-12-10 00:17:38.293415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.963 [2024-12-10 00:17:38.293424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.963 [2024-12-10 00:17:38.293442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-10 00:17:38.303433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.963 [2024-12-10 00:17:38.303488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.963 [2024-12-10 00:17:38.303507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.963 [2024-12-10 00:17:38.303516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.963 [2024-12-10 00:17:38.303525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.963 [2024-12-10 00:17:38.303543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.963 qpair failed and we were unable to recover it. 
00:35:53.963 [2024-12-10 00:17:38.313500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.963 [2024-12-10 00:17:38.313554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.963 [2024-12-10 00:17:38.313569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.963 [2024-12-10 00:17:38.313579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.963 [2024-12-10 00:17:38.313588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.963 [2024-12-10 00:17:38.313605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-12-10 00:17:38.323520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.963 [2024-12-10 00:17:38.323578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.963 [2024-12-10 00:17:38.323594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.963 [2024-12-10 00:17:38.323604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.963 [2024-12-10 00:17:38.323612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.963 [2024-12-10 00:17:38.323630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-10 00:17:38.333503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.964 [2024-12-10 00:17:38.333560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.964 [2024-12-10 00:17:38.333576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.964 [2024-12-10 00:17:38.333586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.964 [2024-12-10 00:17:38.333594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.964 [2024-12-10 00:17:38.333612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-12-10 00:17:38.343535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.964 [2024-12-10 00:17:38.343593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.964 [2024-12-10 00:17:38.343609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.964 [2024-12-10 00:17:38.343619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.964 [2024-12-10 00:17:38.343630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.964 [2024-12-10 00:17:38.343648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-10 00:17:38.353586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.964 [2024-12-10 00:17:38.353666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.964 [2024-12-10 00:17:38.353681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.964 [2024-12-10 00:17:38.353691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.964 [2024-12-10 00:17:38.353700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.964 [2024-12-10 00:17:38.353717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-10 00:17:38.363614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.964 [2024-12-10 00:17:38.363678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.964 [2024-12-10 00:17:38.363695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.964 [2024-12-10 00:17:38.363704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.964 [2024-12-10 00:17:38.363713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.964 [2024-12-10 00:17:38.363730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-12-10 00:17:38.373606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.964 [2024-12-10 00:17:38.373664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.964 [2024-12-10 00:17:38.373680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.964 [2024-12-10 00:17:38.373690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.964 [2024-12-10 00:17:38.373698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.964 [2024-12-10 00:17:38.373716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-10 00:17:38.383640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.964 [2024-12-10 00:17:38.383699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.964 [2024-12-10 00:17:38.383716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.964 [2024-12-10 00:17:38.383726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.964 [2024-12-10 00:17:38.383735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.964 [2024-12-10 00:17:38.383752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-10 00:17:38.393688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.964 [2024-12-10 00:17:38.393749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.964 [2024-12-10 00:17:38.393765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.964 [2024-12-10 00:17:38.393775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.964 [2024-12-10 00:17:38.393783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.964 [2024-12-10 00:17:38.393802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-12-10 00:17:38.403701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.964 [2024-12-10 00:17:38.403754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.964 [2024-12-10 00:17:38.403770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.964 [2024-12-10 00:17:38.403780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.964 [2024-12-10 00:17:38.403789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.964 [2024-12-10 00:17:38.403807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-10 00:17:38.413774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.964 [2024-12-10 00:17:38.413839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.964 [2024-12-10 00:17:38.413856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.964 [2024-12-10 00:17:38.413865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.964 [2024-12-10 00:17:38.413874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.964 [2024-12-10 00:17:38.413892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-12-10 00:17:38.423816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.964 [2024-12-10 00:17:38.423877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.964 [2024-12-10 00:17:38.423892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.964 [2024-12-10 00:17:38.423902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.964 [2024-12-10 00:17:38.423911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.964 [2024-12-10 00:17:38.423930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-12-10 00:17:38.433787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.964 [2024-12-10 00:17:38.433851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.964 [2024-12-10 00:17:38.433867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.964 [2024-12-10 00:17:38.433877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.964 [2024-12-10 00:17:38.433885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:53.964 [2024-12-10 00:17:38.433903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.964 qpair failed and we were unable to recover it. 00:35:54.226 [2024-12-10 00:17:38.443806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.226 [2024-12-10 00:17:38.443866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.226 [2024-12-10 00:17:38.443882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.226 [2024-12-10 00:17:38.443892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.226 [2024-12-10 00:17:38.443901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.226 [2024-12-10 00:17:38.443920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.226 qpair failed and we were unable to recover it. 00:35:54.226 [2024-12-10 00:17:38.453851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.226 [2024-12-10 00:17:38.453903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.226 [2024-12-10 00:17:38.453919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.226 [2024-12-10 00:17:38.453928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.226 [2024-12-10 00:17:38.453938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.226 [2024-12-10 00:17:38.453956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.226 qpair failed and we were unable to recover it. 
00:35:54.226 [2024-12-10 00:17:38.463882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.226 [2024-12-10 00:17:38.463941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.226 [2024-12-10 00:17:38.463957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.226 [2024-12-10 00:17:38.463967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.226 [2024-12-10 00:17:38.463975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.226 [2024-12-10 00:17:38.463994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.226 qpair failed and we were unable to recover it. 00:35:54.226 [2024-12-10 00:17:38.473891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.226 [2024-12-10 00:17:38.473968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.226 [2024-12-10 00:17:38.473984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.226 [2024-12-10 00:17:38.473997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.226 [2024-12-10 00:17:38.474005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.226 [2024-12-10 00:17:38.474023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.226 qpair failed and we were unable to recover it. 00:35:54.226 [2024-12-10 00:17:38.483929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.226 [2024-12-10 00:17:38.483987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.226 [2024-12-10 00:17:38.484003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.226 [2024-12-10 00:17:38.484013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.226 [2024-12-10 00:17:38.484021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.226 [2024-12-10 00:17:38.484040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.226 qpair failed and we were unable to recover it. 
00:35:54.226 [2024-12-10 00:17:38.493966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.226 [2024-12-10 00:17:38.494024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.226 [2024-12-10 00:17:38.494040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.226 [2024-12-10 00:17:38.494050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.226 [2024-12-10 00:17:38.494058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.226 [2024-12-10 00:17:38.494076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.226 qpair failed and we were unable to recover it. 00:35:54.226 [2024-12-10 00:17:38.504002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.226 [2024-12-10 00:17:38.504061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.226 [2024-12-10 00:17:38.504078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.226 [2024-12-10 00:17:38.504087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.226 [2024-12-10 00:17:38.504096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.226 [2024-12-10 00:17:38.504114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.226 qpair failed and we were unable to recover it. 00:35:54.226 [2024-12-10 00:17:38.514024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.226 [2024-12-10 00:17:38.514078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.226 [2024-12-10 00:17:38.514094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.226 [2024-12-10 00:17:38.514103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.226 [2024-12-10 00:17:38.514112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.226 [2024-12-10 00:17:38.514134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.226 qpair failed and we were unable to recover it. 
00:35:54.226 [2024-12-10 00:17:38.524042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.226 [2024-12-10 00:17:38.524098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.226 [2024-12-10 00:17:38.524114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.226 [2024-12-10 00:17:38.524124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.226 [2024-12-10 00:17:38.524132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.226 [2024-12-10 00:17:38.524150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.226 qpair failed and we were unable to recover it. 00:35:54.226 [2024-12-10 00:17:38.534135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.226 [2024-12-10 00:17:38.534194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.226 [2024-12-10 00:17:38.534210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.226 [2024-12-10 00:17:38.534220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.226 [2024-12-10 00:17:38.534228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.226 [2024-12-10 00:17:38.534246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.226 qpair failed and we were unable to recover it. 00:35:54.226 [2024-12-10 00:17:38.544180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.226 [2024-12-10 00:17:38.544263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.226 [2024-12-10 00:17:38.544279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.226 [2024-12-10 00:17:38.544288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.226 [2024-12-10 00:17:38.544297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.226 [2024-12-10 00:17:38.544315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.226 qpair failed and we were unable to recover it. 
00:35:54.226 [2024-12-10 00:17:38.554144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.227 [2024-12-10 00:17:38.554199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.227 [2024-12-10 00:17:38.554215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.227 [2024-12-10 00:17:38.554224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.227 [2024-12-10 00:17:38.554233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.227 [2024-12-10 00:17:38.554250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.227 qpair failed and we were unable to recover it. 00:35:54.227 [2024-12-10 00:17:38.564106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.227 [2024-12-10 00:17:38.564170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.227 [2024-12-10 00:17:38.564186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.227 [2024-12-10 00:17:38.564195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.227 [2024-12-10 00:17:38.564204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.227 [2024-12-10 00:17:38.564223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.227 qpair failed and we were unable to recover it. 00:35:54.227 [2024-12-10 00:17:38.574177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.227 [2024-12-10 00:17:38.574237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.227 [2024-12-10 00:17:38.574253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.227 [2024-12-10 00:17:38.574263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.227 [2024-12-10 00:17:38.574272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.227 [2024-12-10 00:17:38.574290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.227 qpair failed and we were unable to recover it. 
00:35:54.227 [2024-12-10 00:17:38.584156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.227 [2024-12-10 00:17:38.584222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.227 [2024-12-10 00:17:38.584238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.227 [2024-12-10 00:17:38.584248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.227 [2024-12-10 00:17:38.584257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.227 [2024-12-10 00:17:38.584275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.227 qpair failed and we were unable to recover it. 00:35:54.227 [2024-12-10 00:17:38.594252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.227 [2024-12-10 00:17:38.594310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.227 [2024-12-10 00:17:38.594327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.227 [2024-12-10 00:17:38.594337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.227 [2024-12-10 00:17:38.594345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.227 [2024-12-10 00:17:38.594363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.227 qpair failed and we were unable to recover it. 00:35:54.227 [2024-12-10 00:17:38.604261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.227 [2024-12-10 00:17:38.604326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.227 [2024-12-10 00:17:38.604345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.227 [2024-12-10 00:17:38.604355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.227 [2024-12-10 00:17:38.604363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.227 [2024-12-10 00:17:38.604381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.227 qpair failed and we were unable to recover it. 
00:35:54.227 [2024-12-10 00:17:38.614312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.227 [2024-12-10 00:17:38.614364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.227 [2024-12-10 00:17:38.614379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.227 [2024-12-10 00:17:38.614389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.227 [2024-12-10 00:17:38.614398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.227 [2024-12-10 00:17:38.614417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.227 qpair failed and we were unable to recover it. 00:35:54.227 [2024-12-10 00:17:38.624409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.227 [2024-12-10 00:17:38.624466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.227 [2024-12-10 00:17:38.624481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.227 [2024-12-10 00:17:38.624492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.227 [2024-12-10 00:17:38.624501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.227 [2024-12-10 00:17:38.624520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.227 qpair failed and we were unable to recover it. 00:35:54.227 [2024-12-10 00:17:38.634370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.227 [2024-12-10 00:17:38.634441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.227 [2024-12-10 00:17:38.634457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.227 [2024-12-10 00:17:38.634466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.227 [2024-12-10 00:17:38.634475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.227 [2024-12-10 00:17:38.634493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.227 qpair failed and we were unable to recover it. 
00:35:54.227 [2024-12-10 00:17:38.644470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.227 [2024-12-10 00:17:38.644525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.227 [2024-12-10 00:17:38.644541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.227 [2024-12-10 00:17:38.644550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.227 [2024-12-10 00:17:38.644559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.227 [2024-12-10 00:17:38.644580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.227 qpair failed and we were unable to recover it. 00:35:54.227 [2024-12-10 00:17:38.654434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.227 [2024-12-10 00:17:38.654493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.227 [2024-12-10 00:17:38.654509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.227 [2024-12-10 00:17:38.654519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.227 [2024-12-10 00:17:38.654527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.227 [2024-12-10 00:17:38.654545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.227 qpair failed and we were unable to recover it. 00:35:54.227 [2024-12-10 00:17:38.664473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.227 [2024-12-10 00:17:38.664531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.227 [2024-12-10 00:17:38.664547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.227 [2024-12-10 00:17:38.664557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.227 [2024-12-10 00:17:38.664565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.227 [2024-12-10 00:17:38.664584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.227 qpair failed and we were unable to recover it. 
00:35:54.227 [2024-12-10 00:17:38.674486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.227 [2024-12-10 00:17:38.674539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.227 [2024-12-10 00:17:38.674555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.227 [2024-12-10 00:17:38.674565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.227 [2024-12-10 00:17:38.674573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.227 [2024-12-10 00:17:38.674590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.227 qpair failed and we were unable to recover it. 00:35:54.227 [2024-12-10 00:17:38.684522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.227 [2024-12-10 00:17:38.684576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.227 [2024-12-10 00:17:38.684592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.228 [2024-12-10 00:17:38.684602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.228 [2024-12-10 00:17:38.684611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.228 [2024-12-10 00:17:38.684628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.228 qpair failed and we were unable to recover it. 00:35:54.228 [2024-12-10 00:17:38.694552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.228 [2024-12-10 00:17:38.694608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.228 [2024-12-10 00:17:38.694624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.228 [2024-12-10 00:17:38.694634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.228 [2024-12-10 00:17:38.694642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.228 [2024-12-10 00:17:38.694660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.228 qpair failed and we were unable to recover it. 
00:35:54.489 [2024-12-10 00:17:38.704580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.489 [2024-12-10 00:17:38.704640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.489 [2024-12-10 00:17:38.704657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.489 [2024-12-10 00:17:38.704667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.489 [2024-12-10 00:17:38.704676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.489 [2024-12-10 00:17:38.704694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-12-10 00:17:38.714602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.489 [2024-12-10 00:17:38.714653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.489 [2024-12-10 00:17:38.714669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.489 [2024-12-10 00:17:38.714679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.489 [2024-12-10 00:17:38.714687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.489 [2024-12-10 00:17:38.714705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.489 [2024-12-10 00:17:38.724632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.489 [2024-12-10 00:17:38.724684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.489 [2024-12-10 00:17:38.724700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.489 [2024-12-10 00:17:38.724710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.489 [2024-12-10 00:17:38.724719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.489 [2024-12-10 00:17:38.724737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.489 qpair failed and we were unable to recover it. 
00:35:54.489 [2024-12-10 00:17:38.734619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.489 [2024-12-10 00:17:38.734712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.489 [2024-12-10 00:17:38.734731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.489 [2024-12-10 00:17:38.734740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.489 [2024-12-10 00:17:38.734748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.489 [2024-12-10 00:17:38.734766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.489 qpair failed and we were unable to recover it. 00:35:54.490 [2024-12-10 00:17:38.744720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.490 [2024-12-10 00:17:38.744781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.490 [2024-12-10 00:17:38.744797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.490 [2024-12-10 00:17:38.744807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.490 [2024-12-10 00:17:38.744815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.490 [2024-12-10 00:17:38.744838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.490 qpair failed and we were unable to recover it. 00:35:54.490 [2024-12-10 00:17:38.754716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.490 [2024-12-10 00:17:38.754769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.490 [2024-12-10 00:17:38.754784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.490 [2024-12-10 00:17:38.754794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.490 [2024-12-10 00:17:38.754803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.490 [2024-12-10 00:17:38.754821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.490 qpair failed and we were unable to recover it. 
00:35:54.490 [2024-12-10 00:17:38.764745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.490 [2024-12-10 00:17:38.764804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.490 [2024-12-10 00:17:38.764819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.490 [2024-12-10 00:17:38.764833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.490 [2024-12-10 00:17:38.764842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.490 [2024-12-10 00:17:38.764860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.490 qpair failed and we were unable to recover it. 00:35:54.490 [2024-12-10 00:17:38.774774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.490 [2024-12-10 00:17:38.774837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.490 [2024-12-10 00:17:38.774852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.490 [2024-12-10 00:17:38.774862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.490 [2024-12-10 00:17:38.774874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.490 [2024-12-10 00:17:38.774892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.490 qpair failed and we were unable to recover it. 00:35:54.490 [2024-12-10 00:17:38.784736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.490 [2024-12-10 00:17:38.784795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.490 [2024-12-10 00:17:38.784811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.490 [2024-12-10 00:17:38.784820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.490 [2024-12-10 00:17:38.784835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.490 [2024-12-10 00:17:38.784853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.490 qpair failed and we were unable to recover it. 
00:35:54.490 [2024-12-10 00:17:38.794835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.490 [2024-12-10 00:17:38.794900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.490 [2024-12-10 00:17:38.794915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.490 [2024-12-10 00:17:38.794925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.490 [2024-12-10 00:17:38.794933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.490 [2024-12-10 00:17:38.794951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.490 qpair failed and we were unable to recover it. 00:35:54.490 [2024-12-10 00:17:38.804886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.490 [2024-12-10 00:17:38.804945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.490 [2024-12-10 00:17:38.804961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.490 [2024-12-10 00:17:38.804971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.490 [2024-12-10 00:17:38.804980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.490 [2024-12-10 00:17:38.804998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.490 qpair failed and we were unable to recover it. 00:35:54.490 [2024-12-10 00:17:38.814890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.490 [2024-12-10 00:17:38.814951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.490 [2024-12-10 00:17:38.814967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.490 [2024-12-10 00:17:38.814977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.490 [2024-12-10 00:17:38.814986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.490 [2024-12-10 00:17:38.815004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.490 qpair failed and we were unable to recover it. 
00:35:54.490 [2024-12-10 00:17:38.824917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.490 [2024-12-10 00:17:38.824976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.490 [2024-12-10 00:17:38.824993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.490 [2024-12-10 00:17:38.825002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.490 [2024-12-10 00:17:38.825011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.490 [2024-12-10 00:17:38.825029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.490 qpair failed and we were unable to recover it. 00:35:54.490 [2024-12-10 00:17:38.834947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.490 [2024-12-10 00:17:38.835008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.490 [2024-12-10 00:17:38.835025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.490 [2024-12-10 00:17:38.835034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.490 [2024-12-10 00:17:38.835043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.490 [2024-12-10 00:17:38.835060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.490 qpair failed and we were unable to recover it. 00:35:54.490 [2024-12-10 00:17:38.844973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.490 [2024-12-10 00:17:38.845029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.490 [2024-12-10 00:17:38.845045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.490 [2024-12-10 00:17:38.845054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.490 [2024-12-10 00:17:38.845063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.490 [2024-12-10 00:17:38.845082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.490 qpair failed and we were unable to recover it. 
00:35:54.490 [2024-12-10 00:17:38.855037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.490 [2024-12-10 00:17:38.855091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.490 [2024-12-10 00:17:38.855107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.490 [2024-12-10 00:17:38.855116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.490 [2024-12-10 00:17:38.855124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.490 [2024-12-10 00:17:38.855142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.490 qpair failed and we were unable to recover it. 00:35:54.490 [2024-12-10 00:17:38.865077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.490 [2024-12-10 00:17:38.865139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.490 [2024-12-10 00:17:38.865158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.490 [2024-12-10 00:17:38.865168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.490 [2024-12-10 00:17:38.865176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.490 [2024-12-10 00:17:38.865194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.490 qpair failed and we were unable to recover it. 00:35:54.490 [2024-12-10 00:17:38.875092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-10 00:17:38.875151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-10 00:17:38.875167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-10 00:17:38.875177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-10 00:17:38.875185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.491 [2024-12-10 00:17:38.875203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.491 qpair failed and we were unable to recover it. 
00:35:54.491 [2024-12-10 00:17:38.885131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-10 00:17:38.885187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-10 00:17:38.885203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-10 00:17:38.885214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-10 00:17:38.885222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.491 [2024-12-10 00:17:38.885242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.491 qpair failed and we were unable to recover it. 00:35:54.491 [2024-12-10 00:17:38.895148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-10 00:17:38.895213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-10 00:17:38.895229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-10 00:17:38.895239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-10 00:17:38.895248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.491 [2024-12-10 00:17:38.895265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.491 qpair failed and we were unable to recover it. 00:35:54.491 [2024-12-10 00:17:38.905137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-10 00:17:38.905195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-10 00:17:38.905211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-10 00:17:38.905224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-10 00:17:38.905233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.491 [2024-12-10 00:17:38.905252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.491 qpair failed and we were unable to recover it. 
00:35:54.491 [2024-12-10 00:17:38.915091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-10 00:17:38.915151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-10 00:17:38.915167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-10 00:17:38.915177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-10 00:17:38.915187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.491 [2024-12-10 00:17:38.915205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.491 qpair failed and we were unable to recover it. 00:35:54.491 [2024-12-10 00:17:38.925201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-10 00:17:38.925257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-10 00:17:38.925273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-10 00:17:38.925283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-10 00:17:38.925291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.491 [2024-12-10 00:17:38.925309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.491 qpair failed and we were unable to recover it. 00:35:54.491 [2024-12-10 00:17:38.935279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-10 00:17:38.935336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-10 00:17:38.935352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-10 00:17:38.935362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-10 00:17:38.935370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.491 [2024-12-10 00:17:38.935388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.491 qpair failed and we were unable to recover it. 
00:35:54.491 [2024-12-10 00:17:38.945247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-10 00:17:38.945326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-10 00:17:38.945343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-10 00:17:38.945352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-10 00:17:38.945361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.491 [2024-12-10 00:17:38.945379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.491 qpair failed and we were unable to recover it. 00:35:54.491 [2024-12-10 00:17:38.955353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.491 [2024-12-10 00:17:38.955414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.491 [2024-12-10 00:17:38.955429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.491 [2024-12-10 00:17:38.955439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.491 [2024-12-10 00:17:38.955448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.491 [2024-12-10 00:17:38.955466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.491 qpair failed and we were unable to recover it. 00:35:54.753 [2024-12-10 00:17:38.965297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-10 00:17:38.965382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-10 00:17:38.965398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-10 00:17:38.965408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-10 00:17:38.965418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.753 [2024-12-10 00:17:38.965435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-10 00:17:38.975347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-10 00:17:38.975405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-10 00:17:38.975422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-10 00:17:38.975431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-10 00:17:38.975440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.753 [2024-12-10 00:17:38.975458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.753 qpair failed and we were unable to recover it. 00:35:54.753 [2024-12-10 00:17:38.985391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-10 00:17:38.985488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-10 00:17:38.985503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-10 00:17:38.985513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-10 00:17:38.985521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.753 [2024-12-10 00:17:38.985539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.753 qpair failed and we were unable to recover it. 00:35:54.753 [2024-12-10 00:17:38.995371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-10 00:17:38.995433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-10 00:17:38.995449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-10 00:17:38.995459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-10 00:17:38.995468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.753 [2024-12-10 00:17:38.995486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-10 00:17:39.005427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-10 00:17:39.005485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-10 00:17:39.005501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-10 00:17:39.005511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-10 00:17:39.005520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.753 [2024-12-10 00:17:39.005538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.753 qpair failed and we were unable to recover it. 00:35:54.753 [2024-12-10 00:17:39.015450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-10 00:17:39.015507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-10 00:17:39.015522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-10 00:17:39.015532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-10 00:17:39.015540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.753 [2024-12-10 00:17:39.015557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.753 qpair failed and we were unable to recover it. 00:35:54.753 [2024-12-10 00:17:39.025423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-10 00:17:39.025479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-10 00:17:39.025495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-10 00:17:39.025504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-10 00:17:39.025513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.753 [2024-12-10 00:17:39.025532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.753 qpair failed and we were unable to recover it. 
00:35:54.753 [2024-12-10 00:17:39.035488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.753 [2024-12-10 00:17:39.035544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.753 [2024-12-10 00:17:39.035559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.753 [2024-12-10 00:17:39.035572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.753 [2024-12-10 00:17:39.035581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.753 [2024-12-10 00:17:39.035598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.753 qpair failed and we were unable to recover it. 00:35:54.753 [2024-12-10 00:17:39.045462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-10 00:17:39.045513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-10 00:17:39.045529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-10 00:17:39.045538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-10 00:17:39.045547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.754 [2024-12-10 00:17:39.045565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.754 qpair failed and we were unable to recover it. 00:35:54.754 [2024-12-10 00:17:39.055570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-10 00:17:39.055628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-10 00:17:39.055643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-10 00:17:39.055653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-10 00:17:39.055662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.754 [2024-12-10 00:17:39.055680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.754 qpair failed and we were unable to recover it. 
00:35:54.754 [2024-12-10 00:17:39.065567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-10 00:17:39.065651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-10 00:17:39.065668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-10 00:17:39.065677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-10 00:17:39.065686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.754 [2024-12-10 00:17:39.065704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.754 qpair failed and we were unable to recover it. 00:35:54.754 [2024-12-10 00:17:39.075687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-10 00:17:39.075747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-10 00:17:39.075762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-10 00:17:39.075772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-10 00:17:39.075781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.754 [2024-12-10 00:17:39.075802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.754 qpair failed and we were unable to recover it. 00:35:54.754 [2024-12-10 00:17:39.085596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-10 00:17:39.085656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-10 00:17:39.085672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-10 00:17:39.085682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-10 00:17:39.085691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.754 [2024-12-10 00:17:39.085709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.754 qpair failed and we were unable to recover it. 
00:35:54.754 [2024-12-10 00:17:39.095622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-10 00:17:39.095680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-10 00:17:39.095695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-10 00:17:39.095705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-10 00:17:39.095714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.754 [2024-12-10 00:17:39.095731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.754 qpair failed and we were unable to recover it. 00:35:54.754 [2024-12-10 00:17:39.105726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-10 00:17:39.105791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-10 00:17:39.105807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-10 00:17:39.105817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-10 00:17:39.105832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.754 [2024-12-10 00:17:39.105850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.754 qpair failed and we were unable to recover it. 00:35:54.754 [2024-12-10 00:17:39.115685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-10 00:17:39.115739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-10 00:17:39.115755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-10 00:17:39.115765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-10 00:17:39.115774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.754 [2024-12-10 00:17:39.115793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.754 qpair failed and we were unable to recover it. 
00:35:54.754 [2024-12-10 00:17:39.125690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-10 00:17:39.125748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-10 00:17:39.125764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-10 00:17:39.125774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-10 00:17:39.125783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.754 [2024-12-10 00:17:39.125801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.754 qpair failed and we were unable to recover it. 00:35:54.754 [2024-12-10 00:17:39.135780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-10 00:17:39.135882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-10 00:17:39.135898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-10 00:17:39.135907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-10 00:17:39.135916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.754 [2024-12-10 00:17:39.135934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.754 qpair failed and we were unable to recover it. 00:35:54.754 [2024-12-10 00:17:39.145868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-10 00:17:39.145927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-10 00:17:39.145943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-10 00:17:39.145954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-10 00:17:39.145963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.754 [2024-12-10 00:17:39.145982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.754 qpair failed and we were unable to recover it. 
00:35:54.754 [2024-12-10 00:17:39.155858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-10 00:17:39.155919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-10 00:17:39.155935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-10 00:17:39.155945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-10 00:17:39.155954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.754 [2024-12-10 00:17:39.155971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.754 qpair failed and we were unable to recover it. 00:35:54.754 [2024-12-10 00:17:39.165859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-10 00:17:39.165909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-10 00:17:39.165928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.754 [2024-12-10 00:17:39.165938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.754 [2024-12-10 00:17:39.165947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.754 [2024-12-10 00:17:39.165965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.754 qpair failed and we were unable to recover it. 00:35:54.754 [2024-12-10 00:17:39.175902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.754 [2024-12-10 00:17:39.175960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.754 [2024-12-10 00:17:39.175976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.755 [2024-12-10 00:17:39.175986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.755 [2024-12-10 00:17:39.175995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.755 [2024-12-10 00:17:39.176013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.755 qpair failed and we were unable to recover it. 
00:35:54.755 [2024-12-10 00:17:39.185870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.755 [2024-12-10 00:17:39.185925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.755 [2024-12-10 00:17:39.185941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.755 [2024-12-10 00:17:39.185951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.755 [2024-12-10 00:17:39.185959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.755 [2024-12-10 00:17:39.185978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.755 qpair failed and we were unable to recover it. 00:35:54.755 [2024-12-10 00:17:39.195913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.755 [2024-12-10 00:17:39.195991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.755 [2024-12-10 00:17:39.196008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.755 [2024-12-10 00:17:39.196017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.755 [2024-12-10 00:17:39.196026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.755 [2024-12-10 00:17:39.196044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.755 qpair failed and we were unable to recover it. 00:35:54.755 [2024-12-10 00:17:39.205994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.755 [2024-12-10 00:17:39.206054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.755 [2024-12-10 00:17:39.206070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.755 [2024-12-10 00:17:39.206080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.755 [2024-12-10 00:17:39.206092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.755 [2024-12-10 00:17:39.206110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.755 qpair failed and we were unable to recover it. 
00:35:54.755 [2024-12-10 00:17:39.215981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.755 [2024-12-10 00:17:39.216037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.755 [2024-12-10 00:17:39.216053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.755 [2024-12-10 00:17:39.216063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.755 [2024-12-10 00:17:39.216071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:54.755 [2024-12-10 00:17:39.216089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.755 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-10 00:17:39.226142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.017 [2024-12-10 00:17:39.226199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.017 [2024-12-10 00:17:39.226216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.017 [2024-12-10 00:17:39.226226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.017 [2024-12-10 00:17:39.226235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.017 [2024-12-10 00:17:39.226253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-10 00:17:39.236109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.017 [2024-12-10 00:17:39.236167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.017 [2024-12-10 00:17:39.236183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.017 [2024-12-10 00:17:39.236193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.017 [2024-12-10 00:17:39.236202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.017 [2024-12-10 00:17:39.236220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.017 qpair failed and we were unable to recover it. 
00:35:55.017 [2024-12-10 00:17:39.246044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.017 [2024-12-10 00:17:39.246099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.017 [2024-12-10 00:17:39.246115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.017 [2024-12-10 00:17:39.246124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.017 [2024-12-10 00:17:39.246133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.017 [2024-12-10 00:17:39.246151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-10 00:17:39.256101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.017 [2024-12-10 00:17:39.256183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.017 [2024-12-10 00:17:39.256199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.017 [2024-12-10 00:17:39.256208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.017 [2024-12-10 00:17:39.256217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.017 [2024-12-10 00:17:39.256235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-10 00:17:39.266221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.017 [2024-12-10 00:17:39.266285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.017 [2024-12-10 00:17:39.266301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.017 [2024-12-10 00:17:39.266311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.017 [2024-12-10 00:17:39.266319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.017 [2024-12-10 00:17:39.266337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.017 qpair failed and we were unable to recover it. 
00:35:55.017 [2024-12-10 00:17:39.276192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.017 [2024-12-10 00:17:39.276251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.017 [2024-12-10 00:17:39.276267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.017 [2024-12-10 00:17:39.276276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.017 [2024-12-10 00:17:39.276285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.017 [2024-12-10 00:17:39.276303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-10 00:17:39.286234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.017 [2024-12-10 00:17:39.286295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.017 [2024-12-10 00:17:39.286311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.017 [2024-12-10 00:17:39.286321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.017 [2024-12-10 00:17:39.286330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.017 [2024-12-10 00:17:39.286347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-10 00:17:39.296160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.017 [2024-12-10 00:17:39.296225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.017 [2024-12-10 00:17:39.296245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.017 [2024-12-10 00:17:39.296254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.017 [2024-12-10 00:17:39.296263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.017 [2024-12-10 00:17:39.296281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.017 qpair failed and we were unable to recover it. 
00:35:55.017 [2024-12-10 00:17:39.306218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.017 [2024-12-10 00:17:39.306274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.017 [2024-12-10 00:17:39.306290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.017 [2024-12-10 00:17:39.306299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.017 [2024-12-10 00:17:39.306308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.017 [2024-12-10 00:17:39.306326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-10 00:17:39.316313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.017 [2024-12-10 00:17:39.316376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.017 [2024-12-10 00:17:39.316393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.017 [2024-12-10 00:17:39.316402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.017 [2024-12-10 00:17:39.316411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.017 [2024-12-10 00:17:39.316428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-10 00:17:39.326306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.017 [2024-12-10 00:17:39.326402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.017 [2024-12-10 00:17:39.326418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.017 [2024-12-10 00:17:39.326428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.017 [2024-12-10 00:17:39.326436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.017 [2024-12-10 00:17:39.326454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.017 qpair failed and we were unable to recover it. 
00:35:55.017 [2024-12-10 00:17:39.336383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.017 [2024-12-10 00:17:39.336448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.017 [2024-12-10 00:17:39.336464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.017 [2024-12-10 00:17:39.336474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.017 [2024-12-10 00:17:39.336486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.017 [2024-12-10 00:17:39.336504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-10 00:17:39.346409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.017 [2024-12-10 00:17:39.346503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.017 [2024-12-10 00:17:39.346519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.017 [2024-12-10 00:17:39.346528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.017 [2024-12-10 00:17:39.346537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.017 [2024-12-10 00:17:39.346554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-10 00:17:39.356410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.018 [2024-12-10 00:17:39.356469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.018 [2024-12-10 00:17:39.356486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.018 [2024-12-10 00:17:39.356496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.018 [2024-12-10 00:17:39.356504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.018 [2024-12-10 00:17:39.356522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.018 qpair failed and we were unable to recover it. 
00:35:55.018 [2024-12-10 00:17:39.366386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.018 [2024-12-10 00:17:39.366445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.018 [2024-12-10 00:17:39.366461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.018 [2024-12-10 00:17:39.366471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.018 [2024-12-10 00:17:39.366480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.018 [2024-12-10 00:17:39.366498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-10 00:17:39.376432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.018 [2024-12-10 00:17:39.376489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.018 [2024-12-10 00:17:39.376504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.018 [2024-12-10 00:17:39.376514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.018 [2024-12-10 00:17:39.376523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.018 [2024-12-10 00:17:39.376541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-10 00:17:39.386517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.018 [2024-12-10 00:17:39.386571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.018 [2024-12-10 00:17:39.386587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.018 [2024-12-10 00:17:39.386596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.018 [2024-12-10 00:17:39.386605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.018 [2024-12-10 00:17:39.386623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.018 qpair failed and we were unable to recover it. 
00:35:55.018 [2024-12-10 00:17:39.396540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.018 [2024-12-10 00:17:39.396654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.018 [2024-12-10 00:17:39.396670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.018 [2024-12-10 00:17:39.396680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.018 [2024-12-10 00:17:39.396688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.018 [2024-12-10 00:17:39.396706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-10 00:17:39.406481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.018 [2024-12-10 00:17:39.406540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.018 [2024-12-10 00:17:39.406556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.018 [2024-12-10 00:17:39.406565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.018 [2024-12-10 00:17:39.406574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.018 [2024-12-10 00:17:39.406593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-10 00:17:39.416511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.018 [2024-12-10 00:17:39.416564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.018 [2024-12-10 00:17:39.416580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.018 [2024-12-10 00:17:39.416589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.018 [2024-12-10 00:17:39.416598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.018 [2024-12-10 00:17:39.416617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.018 qpair failed and we were unable to recover it. 
00:35:55.018 [2024-12-10 00:17:39.426552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.018 [2024-12-10 00:17:39.426609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.018 [2024-12-10 00:17:39.426627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.018 [2024-12-10 00:17:39.426637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.018 [2024-12-10 00:17:39.426646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.018 [2024-12-10 00:17:39.426664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-10 00:17:39.436705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.018 [2024-12-10 00:17:39.436764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.018 [2024-12-10 00:17:39.436781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.018 [2024-12-10 00:17:39.436791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.018 [2024-12-10 00:17:39.436799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.018 [2024-12-10 00:17:39.436817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-10 00:17:39.446667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.018 [2024-12-10 00:17:39.446724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.018 [2024-12-10 00:17:39.446739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.018 [2024-12-10 00:17:39.446749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.018 [2024-12-10 00:17:39.446757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.018 [2024-12-10 00:17:39.446775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.018 qpair failed and we were unable to recover it. 
00:35:55.018 [2024-12-10 00:17:39.456847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.018 [2024-12-10 00:17:39.456950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.018 [2024-12-10 00:17:39.456966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.018 [2024-12-10 00:17:39.456975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.018 [2024-12-10 00:17:39.456983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.018 [2024-12-10 00:17:39.457001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-10 00:17:39.466775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.018 [2024-12-10 00:17:39.466843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.018 [2024-12-10 00:17:39.466859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.018 [2024-12-10 00:17:39.466871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.018 [2024-12-10 00:17:39.466880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.018 [2024-12-10 00:17:39.466898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-10 00:17:39.476818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.018 [2024-12-10 00:17:39.476881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.018 [2024-12-10 00:17:39.476896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.018 [2024-12-10 00:17:39.476906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.018 [2024-12-10 00:17:39.476915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.018 [2024-12-10 00:17:39.476932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.018 qpair failed and we were unable to recover it. 
00:35:55.018 [2024-12-10 00:17:39.486852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.018 [2024-12-10 00:17:39.486916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.018 [2024-12-10 00:17:39.486932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.018 [2024-12-10 00:17:39.486942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.019 [2024-12-10 00:17:39.486951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.019 [2024-12-10 00:17:39.486969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.280 [2024-12-10 00:17:39.496825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.280 [2024-12-10 00:17:39.496885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.280 [2024-12-10 00:17:39.496902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.280 [2024-12-10 00:17:39.496911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.280 [2024-12-10 00:17:39.496920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.280 [2024-12-10 00:17:39.496938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-12-10 00:17:39.506860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.281 [2024-12-10 00:17:39.506962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.281 [2024-12-10 00:17:39.506978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.281 [2024-12-10 00:17:39.506988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.281 [2024-12-10 00:17:39.506996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.281 [2024-12-10 00:17:39.507014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-12-10 00:17:39.516884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.281 [2024-12-10 00:17:39.516945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.281 [2024-12-10 00:17:39.516962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.281 [2024-12-10 00:17:39.516972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.281 [2024-12-10 00:17:39.516980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.281 [2024-12-10 00:17:39.516998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-12-10 00:17:39.526930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.281 [2024-12-10 00:17:39.526992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.281 [2024-12-10 00:17:39.527008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.281 [2024-12-10 00:17:39.527018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.281 [2024-12-10 00:17:39.527027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.281 [2024-12-10 00:17:39.527045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-12-10 00:17:39.536943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.281 [2024-12-10 00:17:39.536995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.281 [2024-12-10 00:17:39.537010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.281 [2024-12-10 00:17:39.537020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.281 [2024-12-10 00:17:39.537028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.281 [2024-12-10 00:17:39.537046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-12-10 00:17:39.546969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.281 [2024-12-10 00:17:39.547027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.281 [2024-12-10 00:17:39.547042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.281 [2024-12-10 00:17:39.547052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.281 [2024-12-10 00:17:39.547061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.281 [2024-12-10 00:17:39.547079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-12-10 00:17:39.557006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.281 [2024-12-10 00:17:39.557072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.281 [2024-12-10 00:17:39.557088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.281 [2024-12-10 00:17:39.557098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.281 [2024-12-10 00:17:39.557107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.281 [2024-12-10 00:17:39.557125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-12-10 00:17:39.567018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.281 [2024-12-10 00:17:39.567077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.281 [2024-12-10 00:17:39.567092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.281 [2024-12-10 00:17:39.567101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.281 [2024-12-10 00:17:39.567110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.281 [2024-12-10 00:17:39.567127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-12-10 00:17:39.577041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.281 [2024-12-10 00:17:39.577097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.281 [2024-12-10 00:17:39.577113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.281 [2024-12-10 00:17:39.577122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.281 [2024-12-10 00:17:39.577131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.281 [2024-12-10 00:17:39.577149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-12-10 00:17:39.587095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.281 [2024-12-10 00:17:39.587200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.281 [2024-12-10 00:17:39.587216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.281 [2024-12-10 00:17:39.587226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.281 [2024-12-10 00:17:39.587235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.281 [2024-12-10 00:17:39.587253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-12-10 00:17:39.597109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.281 [2024-12-10 00:17:39.597166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.281 [2024-12-10 00:17:39.597182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.281 [2024-12-10 00:17:39.597198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.281 [2024-12-10 00:17:39.597207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.281 [2024-12-10 00:17:39.597225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-12-10 00:17:39.607127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.282 [2024-12-10 00:17:39.607182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.282 [2024-12-10 00:17:39.607198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.282 [2024-12-10 00:17:39.607208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.282 [2024-12-10 00:17:39.607216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.282 [2024-12-10 00:17:39.607235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-12-10 00:17:39.617163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.282 [2024-12-10 00:17:39.617218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.282 [2024-12-10 00:17:39.617234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.282 [2024-12-10 00:17:39.617244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.282 [2024-12-10 00:17:39.617253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.282 [2024-12-10 00:17:39.617270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-12-10 00:17:39.627199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.282 [2024-12-10 00:17:39.627258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.282 [2024-12-10 00:17:39.627274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.282 [2024-12-10 00:17:39.627283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.282 [2024-12-10 00:17:39.627292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.282 [2024-12-10 00:17:39.627310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.282 qpair failed and we were unable to recover it. 
00:35:55.282 [2024-12-10 00:17:39.637233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.282 [2024-12-10 00:17:39.637290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.282 [2024-12-10 00:17:39.637307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.282 [2024-12-10 00:17:39.637316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.282 [2024-12-10 00:17:39.637325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.282 [2024-12-10 00:17:39.637346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-12-10 00:17:39.647228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.282 [2024-12-10 00:17:39.647287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.282 [2024-12-10 00:17:39.647303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.282 [2024-12-10 00:17:39.647313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.282 [2024-12-10 00:17:39.647322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.282 [2024-12-10 00:17:39.647340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-12-10 00:17:39.657255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.282 [2024-12-10 00:17:39.657356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.282 [2024-12-10 00:17:39.657372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.282 [2024-12-10 00:17:39.657382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.282 [2024-12-10 00:17:39.657390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.282 [2024-12-10 00:17:39.657408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.282 qpair failed and we were unable to recover it. 
00:35:55.282 [2024-12-10 00:17:39.667289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.282 [2024-12-10 00:17:39.667358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.282 [2024-12-10 00:17:39.667375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.282 [2024-12-10 00:17:39.667384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.282 [2024-12-10 00:17:39.667393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.282 [2024-12-10 00:17:39.667411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-12-10 00:17:39.677341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.282 [2024-12-10 00:17:39.677396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.282 [2024-12-10 00:17:39.677412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.282 [2024-12-10 00:17:39.677422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.282 [2024-12-10 00:17:39.677431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.282 [2024-12-10 00:17:39.677449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-12-10 00:17:39.687342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.282 [2024-12-10 00:17:39.687427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.282 [2024-12-10 00:17:39.687444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.282 [2024-12-10 00:17:39.687453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.282 [2024-12-10 00:17:39.687462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.282 [2024-12-10 00:17:39.687479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.282 qpair failed and we were unable to recover it. 
00:35:55.282 [2024-12-10 00:17:39.697375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.282 [2024-12-10 00:17:39.697435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.282 [2024-12-10 00:17:39.697451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.282 [2024-12-10 00:17:39.697461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.282 [2024-12-10 00:17:39.697470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.282 [2024-12-10 00:17:39.697487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.282 qpair failed and we were unable to recover it. 00:35:55.282 [2024-12-10 00:17:39.707344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.283 [2024-12-10 00:17:39.707400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.283 [2024-12-10 00:17:39.707416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.283 [2024-12-10 00:17:39.707426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.283 [2024-12-10 00:17:39.707434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.283 [2024-12-10 00:17:39.707453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-12-10 00:17:39.717434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.283 [2024-12-10 00:17:39.717509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.283 [2024-12-10 00:17:39.717525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.283 [2024-12-10 00:17:39.717535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.283 [2024-12-10 00:17:39.717543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.283 [2024-12-10 00:17:39.717561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 [2024-12-10 00:17:39.727468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.283 [2024-12-10 00:17:39.727546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.283 [2024-12-10 00:17:39.727564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.283 [2024-12-10 00:17:39.727574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.283 [2024-12-10 00:17:39.727582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.283 [2024-12-10 00:17:39.727601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-12-10 00:17:39.737507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.283 [2024-12-10 00:17:39.737592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.283 [2024-12-10 00:17:39.737608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.283 [2024-12-10 00:17:39.737617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.283 [2024-12-10 00:17:39.737625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.283 [2024-12-10 00:17:39.737643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-12-10 00:17:39.747533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.283 [2024-12-10 00:17:39.747591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.283 [2024-12-10 00:17:39.747608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.283 [2024-12-10 00:17:39.747617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.283 [2024-12-10 00:17:39.747626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.283 [2024-12-10 00:17:39.747644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.544 [2024-12-10 00:17:39.757535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.544 [2024-12-10 00:17:39.757590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.545 [2024-12-10 00:17:39.757606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.545 [2024-12-10 00:17:39.757616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.545 [2024-12-10 00:17:39.757626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.545 [2024-12-10 00:17:39.757644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.545 qpair failed and we were unable to recover it. 00:35:55.545 [2024-12-10 00:17:39.767640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.545 [2024-12-10 00:17:39.767691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.545 [2024-12-10 00:17:39.767707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.545 [2024-12-10 00:17:39.767717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.545 [2024-12-10 00:17:39.767729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.545 [2024-12-10 00:17:39.767747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.545 qpair failed and we were unable to recover it. 00:35:55.545 [2024-12-10 00:17:39.777619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.545 [2024-12-10 00:17:39.777676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.545 [2024-12-10 00:17:39.777691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.545 [2024-12-10 00:17:39.777701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.545 [2024-12-10 00:17:39.777709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.545 [2024-12-10 00:17:39.777727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.545 qpair failed and we were unable to recover it. 
00:35:55.545 [2024-12-10 00:17:39.787633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.545 [2024-12-10 00:17:39.787688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.545 [2024-12-10 00:17:39.787704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.545 [2024-12-10 00:17:39.787713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.545 [2024-12-10 00:17:39.787723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.545 [2024-12-10 00:17:39.787740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.545 qpair failed and we were unable to recover it. 00:35:55.545 [2024-12-10 00:17:39.797649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.545 [2024-12-10 00:17:39.797707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.545 [2024-12-10 00:17:39.797723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.545 [2024-12-10 00:17:39.797733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.545 [2024-12-10 00:17:39.797742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.545 [2024-12-10 00:17:39.797759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.545 qpair failed and we were unable to recover it. 00:35:55.545 [2024-12-10 00:17:39.807684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.545 [2024-12-10 00:17:39.807786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.545 [2024-12-10 00:17:39.807802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.545 [2024-12-10 00:17:39.807811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.545 [2024-12-10 00:17:39.807820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.545 [2024-12-10 00:17:39.807843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.545 qpair failed and we were unable to recover it. 
00:35:55.545 [2024-12-10 00:17:39.817699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.545 [2024-12-10 00:17:39.817754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.545 [2024-12-10 00:17:39.817770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.545 [2024-12-10 00:17:39.817780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.545 [2024-12-10 00:17:39.817788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.545 [2024-12-10 00:17:39.817807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.545 qpair failed and we were unable to recover it. 00:35:55.545 [2024-12-10 00:17:39.827742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.545 [2024-12-10 00:17:39.827797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.545 [2024-12-10 00:17:39.827813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.545 [2024-12-10 00:17:39.827827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.545 [2024-12-10 00:17:39.827836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.545 [2024-12-10 00:17:39.827854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.545 qpair failed and we were unable to recover it. 00:35:55.545 [2024-12-10 00:17:39.837771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.545 [2024-12-10 00:17:39.837831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.545 [2024-12-10 00:17:39.837846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.545 [2024-12-10 00:17:39.837856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.545 [2024-12-10 00:17:39.837865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.545 [2024-12-10 00:17:39.837882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.545 qpair failed and we were unable to recover it. 
00:35:55.545 [2024-12-10 00:17:39.847785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.545 [2024-12-10 00:17:39.847844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.545 [2024-12-10 00:17:39.847861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.545 [2024-12-10 00:17:39.847871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.545 [2024-12-10 00:17:39.847879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.545 [2024-12-10 00:17:39.847897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.545 qpair failed and we were unable to recover it. 00:35:55.545 [2024-12-10 00:17:39.857817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.545 [2024-12-10 00:17:39.857875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.545 [2024-12-10 00:17:39.857894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.545 [2024-12-10 00:17:39.857904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.545 [2024-12-10 00:17:39.857912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.545 [2024-12-10 00:17:39.857931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.545 qpair failed and we were unable to recover it. 00:35:55.545 [2024-12-10 00:17:39.867886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.545 [2024-12-10 00:17:39.867955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.545 [2024-12-10 00:17:39.867971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.545 [2024-12-10 00:17:39.867981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.545 [2024-12-10 00:17:39.867989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.545 [2024-12-10 00:17:39.868007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.545 qpair failed and we were unable to recover it. 
00:35:55.545 [2024-12-10 00:17:39.877875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.545 [2024-12-10 00:17:39.877930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.545 [2024-12-10 00:17:39.877946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.545 [2024-12-10 00:17:39.877955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.545 [2024-12-10 00:17:39.877964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.545 [2024-12-10 00:17:39.877982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.545 qpair failed and we were unable to recover it. 00:35:55.546 [2024-12-10 00:17:39.887855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.546 [2024-12-10 00:17:39.887910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.546 [2024-12-10 00:17:39.887926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.546 [2024-12-10 00:17:39.887936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.546 [2024-12-10 00:17:39.887945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.546 [2024-12-10 00:17:39.887962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.546 qpair failed and we were unable to recover it. 00:35:55.546 [2024-12-10 00:17:39.897938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.546 [2024-12-10 00:17:39.897996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.546 [2024-12-10 00:17:39.898012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.546 [2024-12-10 00:17:39.898022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.546 [2024-12-10 00:17:39.898033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.546 [2024-12-10 00:17:39.898052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.546 qpair failed and we were unable to recover it. 
00:35:55.546 [2024-12-10 00:17:39.907944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.546 [2024-12-10 00:17:39.908003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.546 [2024-12-10 00:17:39.908019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.546 [2024-12-10 00:17:39.908029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.546 [2024-12-10 00:17:39.908037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.546 [2024-12-10 00:17:39.908056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.546 qpair failed and we were unable to recover it. 00:35:55.546 [2024-12-10 00:17:39.918014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.546 [2024-12-10 00:17:39.918070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.546 [2024-12-10 00:17:39.918085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.546 [2024-12-10 00:17:39.918095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.546 [2024-12-10 00:17:39.918104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.546 [2024-12-10 00:17:39.918121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.546 qpair failed and we were unable to recover it. 00:35:55.546 [2024-12-10 00:17:39.928015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.546 [2024-12-10 00:17:39.928072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.546 [2024-12-10 00:17:39.928087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.546 [2024-12-10 00:17:39.928097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.546 [2024-12-10 00:17:39.928105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.546 [2024-12-10 00:17:39.928123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.546 qpair failed and we were unable to recover it. 
00:35:55.546 [2024-12-10 00:17:39.938039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.546 [2024-12-10 00:17:39.938098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.546 [2024-12-10 00:17:39.938114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.546 [2024-12-10 00:17:39.938124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.546 [2024-12-10 00:17:39.938132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.546 [2024-12-10 00:17:39.938150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.546 qpair failed and we were unable to recover it. 00:35:55.546 [2024-12-10 00:17:39.948085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.546 [2024-12-10 00:17:39.948142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.546 [2024-12-10 00:17:39.948157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.546 [2024-12-10 00:17:39.948167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.546 [2024-12-10 00:17:39.948175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.546 [2024-12-10 00:17:39.948193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.546 qpair failed and we were unable to recover it. 00:35:55.546 [2024-12-10 00:17:39.958104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.546 [2024-12-10 00:17:39.958160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.546 [2024-12-10 00:17:39.958175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.546 [2024-12-10 00:17:39.958185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.546 [2024-12-10 00:17:39.958194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.546 [2024-12-10 00:17:39.958212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.546 qpair failed and we were unable to recover it. 
00:35:55.546 [2024-12-10 00:17:39.968143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.546 [2024-12-10 00:17:39.968199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.546 [2024-12-10 00:17:39.968214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.546 [2024-12-10 00:17:39.968224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.546 [2024-12-10 00:17:39.968232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.546 [2024-12-10 00:17:39.968250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.546 qpair failed and we were unable to recover it. 00:35:55.546 [2024-12-10 00:17:39.978155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.546 [2024-12-10 00:17:39.978207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.546 [2024-12-10 00:17:39.978222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.546 [2024-12-10 00:17:39.978232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.546 [2024-12-10 00:17:39.978240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.546 [2024-12-10 00:17:39.978258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.546 qpair failed and we were unable to recover it. 00:35:55.546 [2024-12-10 00:17:39.988224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.546 [2024-12-10 00:17:39.988330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.546 [2024-12-10 00:17:39.988349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.546 [2024-12-10 00:17:39.988358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.546 [2024-12-10 00:17:39.988366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.546 [2024-12-10 00:17:39.988383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.546 qpair failed and we were unable to recover it. 
00:35:55.546 [2024-12-10 00:17:39.998219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.546 [2024-12-10 00:17:39.998283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.546 [2024-12-10 00:17:39.998299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.546 [2024-12-10 00:17:39.998309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.546 [2024-12-10 00:17:39.998317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.546 [2024-12-10 00:17:39.998335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.546 qpair failed and we were unable to recover it. 00:35:55.546 [2024-12-10 00:17:40.008165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.546 [2024-12-10 00:17:40.008226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.546 [2024-12-10 00:17:40.008242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.546 [2024-12-10 00:17:40.008252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.546 [2024-12-10 00:17:40.008261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.547 [2024-12-10 00:17:40.008279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.547 qpair failed and we were unable to recover it. 00:35:55.807 [2024-12-10 00:17:40.018296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.807 [2024-12-10 00:17:40.018362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.807 [2024-12-10 00:17:40.018382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.807 [2024-12-10 00:17:40.018393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.807 [2024-12-10 00:17:40.018402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.807 [2024-12-10 00:17:40.018423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.807 qpair failed and we were unable to recover it. 
00:35:55.807 [2024-12-10 00:17:40.028234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.807 [2024-12-10 00:17:40.028293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.807 [2024-12-10 00:17:40.028310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.807 [2024-12-10 00:17:40.028324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.807 [2024-12-10 00:17:40.028333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.807 [2024-12-10 00:17:40.028352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.807 qpair failed and we were unable to recover it. 00:35:55.807 [2024-12-10 00:17:40.038345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.807 [2024-12-10 00:17:40.038485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.808 [2024-12-10 00:17:40.038522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.808 [2024-12-10 00:17:40.038555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.808 [2024-12-10 00:17:40.038572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.808 [2024-12-10 00:17:40.038645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.808 qpair failed and we were unable to recover it. 00:35:55.808 [2024-12-10 00:17:40.048424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.808 [2024-12-10 00:17:40.048488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.808 [2024-12-10 00:17:40.048507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.808 [2024-12-10 00:17:40.048518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.808 [2024-12-10 00:17:40.048527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.808 [2024-12-10 00:17:40.048546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.808 qpair failed and we were unable to recover it. 
00:35:55.808 [2024-12-10 00:17:40.058404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.808 [2024-12-10 00:17:40.058470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.808 [2024-12-10 00:17:40.058486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.808 [2024-12-10 00:17:40.058496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.808 [2024-12-10 00:17:40.058505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.808 [2024-12-10 00:17:40.058523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.808 qpair failed and we were unable to recover it. 00:35:55.808 [2024-12-10 00:17:40.068422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.808 [2024-12-10 00:17:40.068481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.808 [2024-12-10 00:17:40.068497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.808 [2024-12-10 00:17:40.068506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.808 [2024-12-10 00:17:40.068515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.808 [2024-12-10 00:17:40.068534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.808 qpair failed and we were unable to recover it. 00:35:55.808 [2024-12-10 00:17:40.078512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.808 [2024-12-10 00:17:40.078604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.808 [2024-12-10 00:17:40.078620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.808 [2024-12-10 00:17:40.078629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.808 [2024-12-10 00:17:40.078638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.808 [2024-12-10 00:17:40.078655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.808 qpair failed and we were unable to recover it. 
00:35:55.808 [2024-12-10 00:17:40.088413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.808 [2024-12-10 00:17:40.088467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.808 [2024-12-10 00:17:40.088483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.808 [2024-12-10 00:17:40.088493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.808 [2024-12-10 00:17:40.088501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.808 [2024-12-10 00:17:40.088519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.808 qpair failed and we were unable to recover it. 00:35:55.808 [2024-12-10 00:17:40.098538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.808 [2024-12-10 00:17:40.098599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.808 [2024-12-10 00:17:40.098618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.808 [2024-12-10 00:17:40.098628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.808 [2024-12-10 00:17:40.098636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.808 [2024-12-10 00:17:40.098655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.808 qpair failed and we were unable to recover it. 00:35:55.808 [2024-12-10 00:17:40.108582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.808 [2024-12-10 00:17:40.108643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.808 [2024-12-10 00:17:40.108659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.808 [2024-12-10 00:17:40.108669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.808 [2024-12-10 00:17:40.108678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.808 [2024-12-10 00:17:40.108696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.808 qpair failed and we were unable to recover it. 
00:35:55.808 [2024-12-10 00:17:40.118605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.808 [2024-12-10 00:17:40.118666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.808 [2024-12-10 00:17:40.118682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.808 [2024-12-10 00:17:40.118693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.808 [2024-12-10 00:17:40.118703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.808 [2024-12-10 00:17:40.118720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.808 qpair failed and we were unable to recover it. 00:35:55.808 [2024-12-10 00:17:40.128572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.808 [2024-12-10 00:17:40.128644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.808 [2024-12-10 00:17:40.128663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.808 [2024-12-10 00:17:40.128672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.808 [2024-12-10 00:17:40.128681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.808 [2024-12-10 00:17:40.128699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.808 qpair failed and we were unable to recover it. 00:35:55.808 [2024-12-10 00:17:40.138627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.808 [2024-12-10 00:17:40.138684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.808 [2024-12-10 00:17:40.138701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.808 [2024-12-10 00:17:40.138711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.808 [2024-12-10 00:17:40.138720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.808 [2024-12-10 00:17:40.138738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.808 qpair failed and we were unable to recover it. 
00:35:55.808 [2024-12-10 00:17:40.148666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.808 [2024-12-10 00:17:40.148726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.808 [2024-12-10 00:17:40.148743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.808 [2024-12-10 00:17:40.148753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.808 [2024-12-10 00:17:40.148762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.808 [2024-12-10 00:17:40.148779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.808 qpair failed and we were unable to recover it. 00:35:55.808 [2024-12-10 00:17:40.158710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.808 [2024-12-10 00:17:40.158769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.808 [2024-12-10 00:17:40.158785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.808 [2024-12-10 00:17:40.158798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.808 [2024-12-10 00:17:40.158807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.808 [2024-12-10 00:17:40.158830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.808 qpair failed and we were unable to recover it. 00:35:55.808 [2024-12-10 00:17:40.168720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.808 [2024-12-10 00:17:40.168774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.808 [2024-12-10 00:17:40.168789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.809 [2024-12-10 00:17:40.168799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.809 [2024-12-10 00:17:40.168808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.809 [2024-12-10 00:17:40.168831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.809 qpair failed and we were unable to recover it. 
00:35:55.809 [2024-12-10 00:17:40.178804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.809 [2024-12-10 00:17:40.178917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.809 [2024-12-10 00:17:40.178932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.809 [2024-12-10 00:17:40.178942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.809 [2024-12-10 00:17:40.178950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.809 [2024-12-10 00:17:40.178968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.809 qpair failed and we were unable to recover it. 00:35:55.809 [2024-12-10 00:17:40.188802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.809 [2024-12-10 00:17:40.188910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.809 [2024-12-10 00:17:40.188925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.809 [2024-12-10 00:17:40.188935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.809 [2024-12-10 00:17:40.188943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.809 [2024-12-10 00:17:40.188961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.809 qpair failed and we were unable to recover it. 00:35:55.809 [2024-12-10 00:17:40.198794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.809 [2024-12-10 00:17:40.198859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.809 [2024-12-10 00:17:40.198875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.809 [2024-12-10 00:17:40.198885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.809 [2024-12-10 00:17:40.198893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.809 [2024-12-10 00:17:40.198915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.809 qpair failed and we were unable to recover it. 
00:35:55.809 [2024-12-10 00:17:40.208850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.809 [2024-12-10 00:17:40.208912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.809 [2024-12-10 00:17:40.208928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.809 [2024-12-10 00:17:40.208938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.809 [2024-12-10 00:17:40.208947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.809 [2024-12-10 00:17:40.208966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.809 qpair failed and we were unable to recover it. 00:35:55.809 [2024-12-10 00:17:40.218885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.809 [2024-12-10 00:17:40.218948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.809 [2024-12-10 00:17:40.218963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.809 [2024-12-10 00:17:40.218973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.809 [2024-12-10 00:17:40.218981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.809 [2024-12-10 00:17:40.218999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.809 qpair failed and we were unable to recover it. 00:35:55.809 [2024-12-10 00:17:40.228890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.809 [2024-12-10 00:17:40.228948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.809 [2024-12-10 00:17:40.228964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.809 [2024-12-10 00:17:40.228974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.809 [2024-12-10 00:17:40.228982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.809 [2024-12-10 00:17:40.229001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.809 qpair failed and we were unable to recover it. 
00:35:55.809 [2024-12-10 00:17:40.238906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.809 [2024-12-10 00:17:40.238967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.809 [2024-12-10 00:17:40.238982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.809 [2024-12-10 00:17:40.238993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.809 [2024-12-10 00:17:40.239001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.809 [2024-12-10 00:17:40.239019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.809 qpair failed and we were unable to recover it. 00:35:55.809 [2024-12-10 00:17:40.248943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.809 [2024-12-10 00:17:40.249036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.809 [2024-12-10 00:17:40.249052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.809 [2024-12-10 00:17:40.249061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.809 [2024-12-10 00:17:40.249070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.809 [2024-12-10 00:17:40.249087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.809 qpair failed and we were unable to recover it. 00:35:55.809 [2024-12-10 00:17:40.258982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.809 [2024-12-10 00:17:40.259065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.809 [2024-12-10 00:17:40.259081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.809 [2024-12-10 00:17:40.259090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.809 [2024-12-10 00:17:40.259099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.809 [2024-12-10 00:17:40.259117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.809 qpair failed and we were unable to recover it. 
00:35:55.809 [2024-12-10 00:17:40.269046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.809 [2024-12-10 00:17:40.269103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.809 [2024-12-10 00:17:40.269120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.809 [2024-12-10 00:17:40.269129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.809 [2024-12-10 00:17:40.269138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.809 [2024-12-10 00:17:40.269155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.809 qpair failed and we were unable to recover it. 00:35:55.809 [2024-12-10 00:17:40.279029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.809 [2024-12-10 00:17:40.279085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.809 [2024-12-10 00:17:40.279101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.809 [2024-12-10 00:17:40.279111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.809 [2024-12-10 00:17:40.279119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:55.809 [2024-12-10 00:17:40.279137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.809 qpair failed and we were unable to recover it. 00:35:56.072 [2024-12-10 00:17:40.289049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-12-10 00:17:40.289100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-12-10 00:17:40.289122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-12-10 00:17:40.289132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-12-10 00:17:40.289140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.072 [2024-12-10 00:17:40.289159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.072 qpair failed and we were unable to recover it. 
00:35:56.072 [2024-12-10 00:17:40.299118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-12-10 00:17:40.299179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-12-10 00:17:40.299196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-12-10 00:17:40.299206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-12-10 00:17:40.299214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.072 [2024-12-10 00:17:40.299232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-12-10 00:17:40.309124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-12-10 00:17:40.309182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-12-10 00:17:40.309198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-12-10 00:17:40.309208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-12-10 00:17:40.309217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.072 [2024-12-10 00:17:40.309236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-12-10 00:17:40.319150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-12-10 00:17:40.319235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-12-10 00:17:40.319251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-12-10 00:17:40.319260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-12-10 00:17:40.319269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.072 [2024-12-10 00:17:40.319286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.072 qpair failed and we were unable to recover it. 
00:35:56.072 [2024-12-10 00:17:40.329197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-12-10 00:17:40.329257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-12-10 00:17:40.329273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-12-10 00:17:40.329283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-12-10 00:17:40.329295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.072 [2024-12-10 00:17:40.329312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-12-10 00:17:40.339191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-12-10 00:17:40.339264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-12-10 00:17:40.339282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-12-10 00:17:40.339292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-12-10 00:17:40.339301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.072 [2024-12-10 00:17:40.339319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-12-10 00:17:40.349222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-12-10 00:17:40.349305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-12-10 00:17:40.349321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-12-10 00:17:40.349330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-12-10 00:17:40.349339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.072 [2024-12-10 00:17:40.349357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.072 qpair failed and we were unable to recover it. 
00:35:56.072 [2024-12-10 00:17:40.359256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-12-10 00:17:40.359326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-12-10 00:17:40.359343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-12-10 00:17:40.359353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-12-10 00:17:40.359361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.072 [2024-12-10 00:17:40.359379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-12-10 00:17:40.369287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-12-10 00:17:40.369344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-12-10 00:17:40.369359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-12-10 00:17:40.369369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-12-10 00:17:40.369378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.072 [2024-12-10 00:17:40.369395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-12-10 00:17:40.379307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-12-10 00:17:40.379365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-12-10 00:17:40.379381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-12-10 00:17:40.379391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-12-10 00:17:40.379399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.072 [2024-12-10 00:17:40.379417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.072 qpair failed and we were unable to recover it. 
00:35:56.072 [2024-12-10 00:17:40.389349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-12-10 00:17:40.389413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-12-10 00:17:40.389430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-12-10 00:17:40.389440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-12-10 00:17:40.389448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.072 [2024-12-10 00:17:40.389466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-12-10 00:17:40.399378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-12-10 00:17:40.399437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-12-10 00:17:40.399453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-12-10 00:17:40.399463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-12-10 00:17:40.399472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.072 [2024-12-10 00:17:40.399489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.072 qpair failed and we were unable to recover it. 00:35:56.072 [2024-12-10 00:17:40.409392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.072 [2024-12-10 00:17:40.409461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.072 [2024-12-10 00:17:40.409477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.072 [2024-12-10 00:17:40.409486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.072 [2024-12-10 00:17:40.409495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.072 [2024-12-10 00:17:40.409512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.073 qpair failed and we were unable to recover it. 
00:35:56.073 [2024-12-10 00:17:40.419480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-12-10 00:17:40.419545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-12-10 00:17:40.419564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-12-10 00:17:40.419574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-12-10 00:17:40.419582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.073 [2024-12-10 00:17:40.419600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.073 [2024-12-10 00:17:40.429481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-12-10 00:17:40.429578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-12-10 00:17:40.429594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-12-10 00:17:40.429603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-12-10 00:17:40.429611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.073 [2024-12-10 00:17:40.429628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.073 [2024-12-10 00:17:40.439524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-12-10 00:17:40.439583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-12-10 00:17:40.439599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-12-10 00:17:40.439608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-12-10 00:17:40.439617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.073 [2024-12-10 00:17:40.439634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.073 qpair failed and we were unable to recover it. 
00:35:56.073 [2024-12-10 00:17:40.449542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-12-10 00:17:40.449607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-12-10 00:17:40.449623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-12-10 00:17:40.449632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-12-10 00:17:40.449641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.073 [2024-12-10 00:17:40.449658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.073 [2024-12-10 00:17:40.459576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-12-10 00:17:40.459641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-12-10 00:17:40.459657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-12-10 00:17:40.459666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-12-10 00:17:40.459678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.073 [2024-12-10 00:17:40.459696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.073 [2024-12-10 00:17:40.469594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-12-10 00:17:40.469652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-12-10 00:17:40.469668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-12-10 00:17:40.469678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-12-10 00:17:40.469686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.073 [2024-12-10 00:17:40.469705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.073 qpair failed and we were unable to recover it. 
00:35:56.073 [2024-12-10 00:17:40.479631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-12-10 00:17:40.479703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-12-10 00:17:40.479720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-12-10 00:17:40.479730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-12-10 00:17:40.479739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.073 [2024-12-10 00:17:40.479757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.073 [2024-12-10 00:17:40.489631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-12-10 00:17:40.489683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-12-10 00:17:40.489699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-12-10 00:17:40.489709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-12-10 00:17:40.489718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.073 [2024-12-10 00:17:40.489736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.073 [2024-12-10 00:17:40.499662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-12-10 00:17:40.499722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-12-10 00:17:40.499738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-12-10 00:17:40.499748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-12-10 00:17:40.499757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.073 [2024-12-10 00:17:40.499774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.073 qpair failed and we were unable to recover it. 
00:35:56.073 [2024-12-10 00:17:40.509648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-12-10 00:17:40.509739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-12-10 00:17:40.509755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-12-10 00:17:40.509764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-12-10 00:17:40.509773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.073 [2024-12-10 00:17:40.509790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.073 [2024-12-10 00:17:40.519651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-12-10 00:17:40.519710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-12-10 00:17:40.519726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-12-10 00:17:40.519735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-12-10 00:17:40.519744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.073 [2024-12-10 00:17:40.519762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.073 [2024-12-10 00:17:40.529773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-12-10 00:17:40.529835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-12-10 00:17:40.529851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-12-10 00:17:40.529862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-12-10 00:17:40.529870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.073 [2024-12-10 00:17:40.529889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.073 qpair failed and we were unable to recover it. 
00:35:56.073 [2024-12-10 00:17:40.539785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.073 [2024-12-10 00:17:40.539842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.073 [2024-12-10 00:17:40.539858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.073 [2024-12-10 00:17:40.539868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.073 [2024-12-10 00:17:40.539877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.073 [2024-12-10 00:17:40.539895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.073 qpair failed and we were unable to recover it. 00:35:56.336 [2024-12-10 00:17:40.549829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.336 [2024-12-10 00:17:40.549886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.336 [2024-12-10 00:17:40.549905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.336 [2024-12-10 00:17:40.549915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.336 [2024-12-10 00:17:40.549924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.336 [2024-12-10 00:17:40.549942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.336 qpair failed and we were unable to recover it. 00:35:56.336 [2024-12-10 00:17:40.559790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.336 [2024-12-10 00:17:40.559848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.336 [2024-12-10 00:17:40.559864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.336 [2024-12-10 00:17:40.559875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.336 [2024-12-10 00:17:40.559883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.336 [2024-12-10 00:17:40.559902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.336 qpair failed and we were unable to recover it. 
00:35:56.336 [2024-12-10 00:17:40.569889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.336 [2024-12-10 00:17:40.569942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.336 [2024-12-10 00:17:40.569958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.336 [2024-12-10 00:17:40.569967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.336 [2024-12-10 00:17:40.569975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.336 [2024-12-10 00:17:40.569993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.336 qpair failed and we were unable to recover it. 00:35:56.336 [2024-12-10 00:17:40.579948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.336 [2024-12-10 00:17:40.580004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.336 [2024-12-10 00:17:40.580020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.336 [2024-12-10 00:17:40.580029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.336 [2024-12-10 00:17:40.580038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.336 [2024-12-10 00:17:40.580056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.336 qpair failed and we were unable to recover it. 00:35:56.336 [2024-12-10 00:17:40.589903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.336 [2024-12-10 00:17:40.589959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.336 [2024-12-10 00:17:40.589975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.336 [2024-12-10 00:17:40.589988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.336 [2024-12-10 00:17:40.589996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.336 [2024-12-10 00:17:40.590014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.336 qpair failed and we were unable to recover it. 
00:35:56.336 [2024-12-10 00:17:40.599979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.336 [2024-12-10 00:17:40.600036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.336 [2024-12-10 00:17:40.600052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.336 [2024-12-10 00:17:40.600062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.336 [2024-12-10 00:17:40.600070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.336 [2024-12-10 00:17:40.600089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.336 qpair failed and we were unable to recover it. 00:35:56.336 [2024-12-10 00:17:40.610025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.336 [2024-12-10 00:17:40.610124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.336 [2024-12-10 00:17:40.610140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.336 [2024-12-10 00:17:40.610149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.336 [2024-12-10 00:17:40.610158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.336 [2024-12-10 00:17:40.610176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.336 qpair failed and we were unable to recover it. 00:35:56.336 [2024-12-10 00:17:40.620064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.336 [2024-12-10 00:17:40.620151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.336 [2024-12-10 00:17:40.620167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.336 [2024-12-10 00:17:40.620177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.336 [2024-12-10 00:17:40.620185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.336 [2024-12-10 00:17:40.620203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.336 qpair failed and we were unable to recover it. 
00:35:56.336 [2024-12-10 00:17:40.630073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.336 [2024-12-10 00:17:40.630132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.336 [2024-12-10 00:17:40.630148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.336 [2024-12-10 00:17:40.630158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.336 [2024-12-10 00:17:40.630167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.336 [2024-12-10 00:17:40.630187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.336 qpair failed and we were unable to recover it. 00:35:56.336 [2024-12-10 00:17:40.640086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.336 [2024-12-10 00:17:40.640146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.336 [2024-12-10 00:17:40.640162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.336 [2024-12-10 00:17:40.640172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.336 [2024-12-10 00:17:40.640181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.336 [2024-12-10 00:17:40.640198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.336 qpair failed and we were unable to recover it. 00:35:56.336 [2024-12-10 00:17:40.650142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.336 [2024-12-10 00:17:40.650204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.336 [2024-12-10 00:17:40.650220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.336 [2024-12-10 00:17:40.650230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.336 [2024-12-10 00:17:40.650238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.336 [2024-12-10 00:17:40.650256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.336 qpair failed and we were unable to recover it. 
00:35:56.336 [2024-12-10 00:17:40.660143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.336 [2024-12-10 00:17:40.660239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.336 [2024-12-10 00:17:40.660255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.336 [2024-12-10 00:17:40.660265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.336 [2024-12-10 00:17:40.660273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.336 [2024-12-10 00:17:40.660292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.336 qpair failed and we were unable to recover it. 00:35:56.336 [2024-12-10 00:17:40.670183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.337 [2024-12-10 00:17:40.670250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.337 [2024-12-10 00:17:40.670266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.337 [2024-12-10 00:17:40.670275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.337 [2024-12-10 00:17:40.670284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.337 [2024-12-10 00:17:40.670303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.337 qpair failed and we were unable to recover it. 00:35:56.337 [2024-12-10 00:17:40.680210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.337 [2024-12-10 00:17:40.680266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.337 [2024-12-10 00:17:40.680282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.337 [2024-12-10 00:17:40.680291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.337 [2024-12-10 00:17:40.680300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.337 [2024-12-10 00:17:40.680319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.337 qpair failed and we were unable to recover it. 
00:35:56.337 [2024-12-10 00:17:40.690234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.337 [2024-12-10 00:17:40.690290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.337 [2024-12-10 00:17:40.690306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.337 [2024-12-10 00:17:40.690315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.337 [2024-12-10 00:17:40.690324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.337 [2024-12-10 00:17:40.690342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.337 qpair failed and we were unable to recover it. 00:35:56.337 [2024-12-10 00:17:40.700249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.337 [2024-12-10 00:17:40.700303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.337 [2024-12-10 00:17:40.700319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.337 [2024-12-10 00:17:40.700329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.337 [2024-12-10 00:17:40.700338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.337 [2024-12-10 00:17:40.700357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.337 qpair failed and we were unable to recover it. 00:35:56.337 [2024-12-10 00:17:40.710237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.337 [2024-12-10 00:17:40.710300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.337 [2024-12-10 00:17:40.710315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.337 [2024-12-10 00:17:40.710325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.337 [2024-12-10 00:17:40.710333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.337 [2024-12-10 00:17:40.710350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.337 qpair failed and we were unable to recover it. 
00:35:56.337 [2024-12-10 00:17:40.720321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.337 [2024-12-10 00:17:40.720387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.337 [2024-12-10 00:17:40.720403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.337 [2024-12-10 00:17:40.720416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.337 [2024-12-10 00:17:40.720425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.337 [2024-12-10 00:17:40.720443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.337 qpair failed and we were unable to recover it. 00:35:56.337 [2024-12-10 00:17:40.730332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.337 [2024-12-10 00:17:40.730388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.337 [2024-12-10 00:17:40.730404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.337 [2024-12-10 00:17:40.730414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.337 [2024-12-10 00:17:40.730423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.337 [2024-12-10 00:17:40.730441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.337 qpair failed and we were unable to recover it. 00:35:56.337 [2024-12-10 00:17:40.740437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.337 [2024-12-10 00:17:40.740519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.337 [2024-12-10 00:17:40.740535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.337 [2024-12-10 00:17:40.740545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.337 [2024-12-10 00:17:40.740554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.337 [2024-12-10 00:17:40.740571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.337 qpair failed and we were unable to recover it. 
00:35:56.337 [2024-12-10 00:17:40.750459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.337 [2024-12-10 00:17:40.750518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.337 [2024-12-10 00:17:40.750533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.337 [2024-12-10 00:17:40.750543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.337 [2024-12-10 00:17:40.750552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.337 [2024-12-10 00:17:40.750570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.337 qpair failed and we were unable to recover it. 00:35:56.337 [2024-12-10 00:17:40.760430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.337 [2024-12-10 00:17:40.760490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.337 [2024-12-10 00:17:40.760506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.337 [2024-12-10 00:17:40.760516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.337 [2024-12-10 00:17:40.760525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.337 [2024-12-10 00:17:40.760545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.337 qpair failed and we were unable to recover it. 00:35:56.337 [2024-12-10 00:17:40.770388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.337 [2024-12-10 00:17:40.770443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.337 [2024-12-10 00:17:40.770458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.337 [2024-12-10 00:17:40.770467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.337 [2024-12-10 00:17:40.770476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.337 [2024-12-10 00:17:40.770495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.337 qpair failed and we were unable to recover it. 
00:35:56.337 [2024-12-10 00:17:40.780539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.337 [2024-12-10 00:17:40.780608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.337 [2024-12-10 00:17:40.780624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.337 [2024-12-10 00:17:40.780634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.337 [2024-12-10 00:17:40.780642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.337 [2024-12-10 00:17:40.780660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.337 qpair failed and we were unable to recover it. 00:35:56.337 [2024-12-10 00:17:40.790558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.337 [2024-12-10 00:17:40.790616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.337 [2024-12-10 00:17:40.790633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.337 [2024-12-10 00:17:40.790643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.337 [2024-12-10 00:17:40.790651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.337 [2024-12-10 00:17:40.790669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.337 qpair failed and we were unable to recover it. 00:35:56.337 [2024-12-10 00:17:40.800590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.337 [2024-12-10 00:17:40.800648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.337 [2024-12-10 00:17:40.800664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.337 [2024-12-10 00:17:40.800674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.338 [2024-12-10 00:17:40.800682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.338 [2024-12-10 00:17:40.800700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.338 qpair failed and we were unable to recover it. 
00:35:56.604 [2024-12-10 00:17:40.810581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.604 [2024-12-10 00:17:40.810639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.604 [2024-12-10 00:17:40.810655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.604 [2024-12-10 00:17:40.810664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.604 [2024-12-10 00:17:40.810673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.604 [2024-12-10 00:17:40.810690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.604 qpair failed and we were unable to recover it. 00:35:56.604 [2024-12-10 00:17:40.820654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.604 [2024-12-10 00:17:40.820714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.604 [2024-12-10 00:17:40.820730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.604 [2024-12-10 00:17:40.820740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.604 [2024-12-10 00:17:40.820749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.604 [2024-12-10 00:17:40.820766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.604 qpair failed and we were unable to recover it. 00:35:56.604 [2024-12-10 00:17:40.830573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.604 [2024-12-10 00:17:40.830635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.604 [2024-12-10 00:17:40.830651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.604 [2024-12-10 00:17:40.830661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.604 [2024-12-10 00:17:40.830670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.604 [2024-12-10 00:17:40.830688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.604 qpair failed and we were unable to recover it. 
00:35:56.604 [2024-12-10 00:17:40.840663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.604 [2024-12-10 00:17:40.840718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.604 [2024-12-10 00:17:40.840733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.604 [2024-12-10 00:17:40.840742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.604 [2024-12-10 00:17:40.840751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.604 [2024-12-10 00:17:40.840768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.604 qpair failed and we were unable to recover it. 00:35:56.604 [2024-12-10 00:17:40.850693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.604 [2024-12-10 00:17:40.850749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.604 [2024-12-10 00:17:40.850768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.604 [2024-12-10 00:17:40.850777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.604 [2024-12-10 00:17:40.850786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.604 [2024-12-10 00:17:40.850804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.604 qpair failed and we were unable to recover it. 00:35:56.604 [2024-12-10 00:17:40.860719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.604 [2024-12-10 00:17:40.860773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.604 [2024-12-10 00:17:40.860789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.604 [2024-12-10 00:17:40.860799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.604 [2024-12-10 00:17:40.860808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.604 [2024-12-10 00:17:40.860830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.604 qpair failed and we were unable to recover it. 
00:35:56.604 [2024-12-10 00:17:40.870817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.604 [2024-12-10 00:17:40.870917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.604 [2024-12-10 00:17:40.870933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.604 [2024-12-10 00:17:40.870942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.604 [2024-12-10 00:17:40.870951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.604 [2024-12-10 00:17:40.870969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.604 qpair failed and we were unable to recover it. 00:35:56.604 [2024-12-10 00:17:40.880821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.604 [2024-12-10 00:17:40.880886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:40.880901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:40.880910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:40.880919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:40.880937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 00:35:56.605 [2024-12-10 00:17:40.890811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:40.890876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:40.890892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:40.890902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:40.890913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:40.890932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 
00:35:56.605 [2024-12-10 00:17:40.900850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:40.900907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:40.900923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:40.900933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:40.900942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:40.900960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 00:35:56.605 [2024-12-10 00:17:40.910846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:40.910908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:40.910924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:40.910934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:40.910942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:40.910960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 00:35:56.605 [2024-12-10 00:17:40.920891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:40.920947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:40.920963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:40.920972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:40.920981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:40.920999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 
00:35:56.605 [2024-12-10 00:17:40.930934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:40.931021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:40.931036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:40.931046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:40.931054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:40.931073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 00:35:56.605 [2024-12-10 00:17:40.940979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:40.941033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:40.941049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:40.941058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:40.941067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:40.941085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 00:35:56.605 [2024-12-10 00:17:40.951019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:40.951084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:40.951099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:40.951108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:40.951117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:40.951135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 
00:35:56.605 [2024-12-10 00:17:40.961026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:40.961104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:40.961119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:40.961128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:40.961137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:40.961154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 00:35:56.605 [2024-12-10 00:17:40.971076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:40.971136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:40.971151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:40.971161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:40.971170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:40.971188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 00:35:56.605 [2024-12-10 00:17:40.981074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:40.981131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:40.981153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:40.981163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:40.981172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:40.981190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 
00:35:56.605 [2024-12-10 00:17:40.991047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:40.991124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:40.991141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:40.991150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:40.991158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:40.991176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 00:35:56.605 [2024-12-10 00:17:41.001062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:41.001119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:41.001135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:41.001145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:41.001154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:41.001171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 00:35:56.605 [2024-12-10 00:17:41.011086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:41.011144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:41.011159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:41.011169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:41.011178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:41.011195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 
00:35:56.605 [2024-12-10 00:17:41.021096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:41.021164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:41.021180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:41.021189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:41.021201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:41.021218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 00:35:56.605 [2024-12-10 00:17:41.031181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:41.031238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:41.031254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:41.031263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:41.031272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:41.031290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 00:35:56.605 [2024-12-10 00:17:41.041237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:41.041305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:41.041321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:41.041330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:41.041339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:41.041357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 
00:35:56.605 [2024-12-10 00:17:41.051240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:41.051297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:41.051313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:41.051323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:41.051331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:41.051348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 00:35:56.605 [2024-12-10 00:17:41.061267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:41.061353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:41.061369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:41.061378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.605 [2024-12-10 00:17:41.061386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.605 [2024-12-10 00:17:41.061404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.605 qpair failed and we were unable to recover it. 00:35:56.605 [2024-12-10 00:17:41.071321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.605 [2024-12-10 00:17:41.071392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.605 [2024-12-10 00:17:41.071408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.605 [2024-12-10 00:17:41.071417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.606 [2024-12-10 00:17:41.071426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.606 [2024-12-10 00:17:41.071444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.606 qpair failed and we were unable to recover it. 
00:35:56.869 [2024-12-10 00:17:41.081351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.869 [2024-12-10 00:17:41.081415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.869 [2024-12-10 00:17:41.081431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.869 [2024-12-10 00:17:41.081441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.869 [2024-12-10 00:17:41.081450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.869 [2024-12-10 00:17:41.081468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.869 qpair failed and we were unable to recover it. 00:35:56.869 [2024-12-10 00:17:41.091357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.869 [2024-12-10 00:17:41.091435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.869 [2024-12-10 00:17:41.091451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.869 [2024-12-10 00:17:41.091461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.869 [2024-12-10 00:17:41.091470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.869 [2024-12-10 00:17:41.091487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.869 qpair failed and we were unable to recover it. 00:35:56.869 [2024-12-10 00:17:41.101322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.869 [2024-12-10 00:17:41.101390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.869 [2024-12-10 00:17:41.101407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.869 [2024-12-10 00:17:41.101416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.869 [2024-12-10 00:17:41.101424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.869 [2024-12-10 00:17:41.101443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.869 qpair failed and we were unable to recover it. 
00:35:56.869 [2024-12-10 00:17:41.111412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.869 [2024-12-10 00:17:41.111516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.869 [2024-12-10 00:17:41.111535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.869 [2024-12-10 00:17:41.111544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.869 [2024-12-10 00:17:41.111553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.869 [2024-12-10 00:17:41.111570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.869 qpair failed and we were unable to recover it. 00:35:56.869 [2024-12-10 00:17:41.121458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.869 [2024-12-10 00:17:41.121519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.869 [2024-12-10 00:17:41.121535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.869 [2024-12-10 00:17:41.121545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.869 [2024-12-10 00:17:41.121553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.869 [2024-12-10 00:17:41.121571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.869 qpair failed and we were unable to recover it. 00:35:56.869 [2024-12-10 00:17:41.131507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.869 [2024-12-10 00:17:41.131563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.869 [2024-12-10 00:17:41.131579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.869 [2024-12-10 00:17:41.131588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.869 [2024-12-10 00:17:41.131597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.869 [2024-12-10 00:17:41.131615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.869 qpair failed and we were unable to recover it. 
00:35:56.869 [2024-12-10 00:17:41.141580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.869 [2024-12-10 00:17:41.141658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.869 [2024-12-10 00:17:41.141673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.869 [2024-12-10 00:17:41.141683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.869 [2024-12-10 00:17:41.141691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.869 [2024-12-10 00:17:41.141709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.869 qpair failed and we were unable to recover it. 00:35:56.869 [2024-12-10 00:17:41.151565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.869 [2024-12-10 00:17:41.151623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.869 [2024-12-10 00:17:41.151640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.869 [2024-12-10 00:17:41.151654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.869 [2024-12-10 00:17:41.151662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.869 [2024-12-10 00:17:41.151681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.869 qpair failed and we were unable to recover it. 00:35:56.869 [2024-12-10 00:17:41.161575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.869 [2024-12-10 00:17:41.161638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.869 [2024-12-10 00:17:41.161655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.869 [2024-12-10 00:17:41.161665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.869 [2024-12-10 00:17:41.161673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.869 [2024-12-10 00:17:41.161691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.869 qpair failed and we were unable to recover it. 
00:35:56.869 [2024-12-10 00:17:41.171542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.869 [2024-12-10 00:17:41.171616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.869 [2024-12-10 00:17:41.171633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.870 [2024-12-10 00:17:41.171643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.870 [2024-12-10 00:17:41.171651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.870 [2024-12-10 00:17:41.171670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.870 qpair failed and we were unable to recover it. 00:35:56.870 [2024-12-10 00:17:41.181603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.870 [2024-12-10 00:17:41.181655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.870 [2024-12-10 00:17:41.181670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.870 [2024-12-10 00:17:41.181680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.870 [2024-12-10 00:17:41.181689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.870 [2024-12-10 00:17:41.181707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.870 qpair failed and we were unable to recover it. 00:35:56.870 [2024-12-10 00:17:41.191641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.870 [2024-12-10 00:17:41.191711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.870 [2024-12-10 00:17:41.191728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.870 [2024-12-10 00:17:41.191737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.870 [2024-12-10 00:17:41.191746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.870 [2024-12-10 00:17:41.191767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.870 qpair failed and we were unable to recover it. 
00:35:56.870 [2024-12-10 00:17:41.201696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.870 [2024-12-10 00:17:41.201786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.870 [2024-12-10 00:17:41.201802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.870 [2024-12-10 00:17:41.201812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.870 [2024-12-10 00:17:41.201820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.870 [2024-12-10 00:17:41.201843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.870 qpair failed and we were unable to recover it. 00:35:56.870 [2024-12-10 00:17:41.211690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.870 [2024-12-10 00:17:41.211743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.870 [2024-12-10 00:17:41.211759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.870 [2024-12-10 00:17:41.211768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.870 [2024-12-10 00:17:41.211777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.870 [2024-12-10 00:17:41.211795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.870 qpair failed and we were unable to recover it. 00:35:56.870 [2024-12-10 00:17:41.221736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.870 [2024-12-10 00:17:41.221810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.870 [2024-12-10 00:17:41.221831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.870 [2024-12-10 00:17:41.221841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.870 [2024-12-10 00:17:41.221850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.870 [2024-12-10 00:17:41.221868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.870 qpair failed and we were unable to recover it. 
00:35:56.870 [2024-12-10 00:17:41.231755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.870 [2024-12-10 00:17:41.231814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.870 [2024-12-10 00:17:41.231834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.870 [2024-12-10 00:17:41.231843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.870 [2024-12-10 00:17:41.231852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.870 [2024-12-10 00:17:41.231870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.870 qpair failed and we were unable to recover it. 00:35:56.870 [2024-12-10 00:17:41.241837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.870 [2024-12-10 00:17:41.241938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.870 [2024-12-10 00:17:41.241962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.870 [2024-12-10 00:17:41.241972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.870 [2024-12-10 00:17:41.241981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.870 [2024-12-10 00:17:41.242000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.870 qpair failed and we were unable to recover it. 00:35:56.870 [2024-12-10 00:17:41.251809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.870 [2024-12-10 00:17:41.251868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.870 [2024-12-10 00:17:41.251884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.870 [2024-12-10 00:17:41.251894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.870 [2024-12-10 00:17:41.251902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.870 [2024-12-10 00:17:41.251920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.870 qpair failed and we were unable to recover it. 
00:35:56.870 [2024-12-10 00:17:41.261885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.870 [2024-12-10 00:17:41.261943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.870 [2024-12-10 00:17:41.261959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.870 [2024-12-10 00:17:41.261969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.870 [2024-12-10 00:17:41.261977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.870 [2024-12-10 00:17:41.261995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.870 qpair failed and we were unable to recover it. 00:35:56.870 [2024-12-10 00:17:41.271886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.870 [2024-12-10 00:17:41.271964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.870 [2024-12-10 00:17:41.271980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.870 [2024-12-10 00:17:41.271990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.870 [2024-12-10 00:17:41.271998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.870 [2024-12-10 00:17:41.272016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.870 qpair failed and we were unable to recover it. 00:35:56.870 [2024-12-10 00:17:41.281947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.870 [2024-12-10 00:17:41.282007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.870 [2024-12-10 00:17:41.282022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.870 [2024-12-10 00:17:41.282036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.870 [2024-12-10 00:17:41.282044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.870 [2024-12-10 00:17:41.282062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.870 qpair failed and we were unable to recover it. 
00:35:56.870 [2024-12-10 00:17:41.291927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.870 [2024-12-10 00:17:41.291986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.870 [2024-12-10 00:17:41.292002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.870 [2024-12-10 00:17:41.292012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.870 [2024-12-10 00:17:41.292021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.870 [2024-12-10 00:17:41.292039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.870 qpair failed and we were unable to recover it. 00:35:56.870 [2024-12-10 00:17:41.301990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.870 [2024-12-10 00:17:41.302089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.870 [2024-12-10 00:17:41.302105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.870 [2024-12-10 00:17:41.302115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.871 [2024-12-10 00:17:41.302123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.871 [2024-12-10 00:17:41.302142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.871 qpair failed and we were unable to recover it. 00:35:56.871 [2024-12-10 00:17:41.312009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.871 [2024-12-10 00:17:41.312071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.871 [2024-12-10 00:17:41.312087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.871 [2024-12-10 00:17:41.312096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.871 [2024-12-10 00:17:41.312105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.871 [2024-12-10 00:17:41.312123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.871 qpair failed and we were unable to recover it. 
00:35:56.871 [2024-12-10 00:17:41.322028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.871 [2024-12-10 00:17:41.322086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.871 [2024-12-10 00:17:41.322102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.871 [2024-12-10 00:17:41.322112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.871 [2024-12-10 00:17:41.322120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.871 [2024-12-10 00:17:41.322141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.871 qpair failed and we were unable to recover it. 00:35:56.871 [2024-12-10 00:17:41.331973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.871 [2024-12-10 00:17:41.332032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.871 [2024-12-10 00:17:41.332048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.871 [2024-12-10 00:17:41.332058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.871 [2024-12-10 00:17:41.332066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:56.871 [2024-12-10 00:17:41.332084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.871 qpair failed and we were unable to recover it. 00:35:57.132 [2024-12-10 00:17:41.342090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.132 [2024-12-10 00:17:41.342141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.132 [2024-12-10 00:17:41.342158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.132 [2024-12-10 00:17:41.342167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.132 [2024-12-10 00:17:41.342177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.132 [2024-12-10 00:17:41.342195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.132 qpair failed and we were unable to recover it. 
00:35:57.132 [2024-12-10 00:17:41.352085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.133 [2024-12-10 00:17:41.352140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.133 [2024-12-10 00:17:41.352156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.133 [2024-12-10 00:17:41.352165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.133 [2024-12-10 00:17:41.352174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.133 [2024-12-10 00:17:41.352192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.133 qpair failed and we were unable to recover it. 00:35:57.133 [2024-12-10 00:17:41.362114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.133 [2024-12-10 00:17:41.362173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.133 [2024-12-10 00:17:41.362189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.133 [2024-12-10 00:17:41.362199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.133 [2024-12-10 00:17:41.362207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.133 [2024-12-10 00:17:41.362225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.133 qpair failed and we were unable to recover it. 00:35:57.133 [2024-12-10 00:17:41.372138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.133 [2024-12-10 00:17:41.372192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.133 [2024-12-10 00:17:41.372207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.133 [2024-12-10 00:17:41.372217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.133 [2024-12-10 00:17:41.372225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.133 [2024-12-10 00:17:41.372243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.133 qpair failed and we were unable to recover it. 
00:35:57.133 [2024-12-10 00:17:41.382162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.133 [2024-12-10 00:17:41.382233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.133 [2024-12-10 00:17:41.382250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.133 [2024-12-10 00:17:41.382260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.133 [2024-12-10 00:17:41.382268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.133 [2024-12-10 00:17:41.382286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.133 qpair failed and we were unable to recover it. 00:35:57.133 [2024-12-10 00:17:41.392197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.133 [2024-12-10 00:17:41.392254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.133 [2024-12-10 00:17:41.392270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.133 [2024-12-10 00:17:41.392279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.133 [2024-12-10 00:17:41.392288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.133 [2024-12-10 00:17:41.392306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.133 qpair failed and we were unable to recover it. 00:35:57.133 [2024-12-10 00:17:41.402224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.133 [2024-12-10 00:17:41.402286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.133 [2024-12-10 00:17:41.402302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.133 [2024-12-10 00:17:41.402312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.133 [2024-12-10 00:17:41.402320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.133 [2024-12-10 00:17:41.402338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.133 qpair failed and we were unable to recover it. 
00:35:57.133 [2024-12-10 00:17:41.412227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.133 [2024-12-10 00:17:41.412281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.133 [2024-12-10 00:17:41.412301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.133 [2024-12-10 00:17:41.412310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.133 [2024-12-10 00:17:41.412319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.133 [2024-12-10 00:17:41.412336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.133 qpair failed and we were unable to recover it. 00:35:57.133 [2024-12-10 00:17:41.422205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.133 [2024-12-10 00:17:41.422260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.133 [2024-12-10 00:17:41.422276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.133 [2024-12-10 00:17:41.422286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.133 [2024-12-10 00:17:41.422295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.133 [2024-12-10 00:17:41.422312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.133 qpair failed and we were unable to recover it. 00:35:57.133 [2024-12-10 00:17:41.432340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.133 [2024-12-10 00:17:41.432399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.133 [2024-12-10 00:17:41.432414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.133 [2024-12-10 00:17:41.432424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.133 [2024-12-10 00:17:41.432433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.133 [2024-12-10 00:17:41.432450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.133 qpair failed and we were unable to recover it. 
00:35:57.133 [2024-12-10 00:17:41.442338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.133 [2024-12-10 00:17:41.442398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.133 [2024-12-10 00:17:41.442413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.133 [2024-12-10 00:17:41.442423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.133 [2024-12-10 00:17:41.442431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.133 [2024-12-10 00:17:41.442449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.133 qpair failed and we were unable to recover it. 00:35:57.133 [2024-12-10 00:17:41.452387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.133 [2024-12-10 00:17:41.452443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.133 [2024-12-10 00:17:41.452459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.133 [2024-12-10 00:17:41.452468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.133 [2024-12-10 00:17:41.452480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.133 [2024-12-10 00:17:41.452497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.133 qpair failed and we were unable to recover it. 00:35:57.133 [2024-12-10 00:17:41.462387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.133 [2024-12-10 00:17:41.462462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.133 [2024-12-10 00:17:41.462478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.133 [2024-12-10 00:17:41.462488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.133 [2024-12-10 00:17:41.462497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.133 [2024-12-10 00:17:41.462514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.133 qpair failed and we were unable to recover it. 
00:35:57.133 [2024-12-10 00:17:41.472429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.133 [2024-12-10 00:17:41.472496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.133 [2024-12-10 00:17:41.472512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.133 [2024-12-10 00:17:41.472522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.133 [2024-12-10 00:17:41.472531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.133 [2024-12-10 00:17:41.472549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.133 qpair failed and we were unable to recover it. 00:35:57.133 [2024-12-10 00:17:41.482437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.134 [2024-12-10 00:17:41.482544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.134 [2024-12-10 00:17:41.482560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.134 [2024-12-10 00:17:41.482569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.134 [2024-12-10 00:17:41.482578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.134 [2024-12-10 00:17:41.482596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.134 qpair failed and we were unable to recover it. 00:35:57.134 [2024-12-10 00:17:41.492469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.134 [2024-12-10 00:17:41.492523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.134 [2024-12-10 00:17:41.492539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.134 [2024-12-10 00:17:41.492549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.134 [2024-12-10 00:17:41.492557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.134 [2024-12-10 00:17:41.492575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.134 qpair failed and we were unable to recover it. 
00:35:57.134 [2024-12-10 00:17:41.502495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.134 [2024-12-10 00:17:41.502546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.134 [2024-12-10 00:17:41.502562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.134 [2024-12-10 00:17:41.502571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.134 [2024-12-10 00:17:41.502580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.134 [2024-12-10 00:17:41.502598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.134 qpair failed and we were unable to recover it. 00:35:57.134 [2024-12-10 00:17:41.512530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.134 [2024-12-10 00:17:41.512616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.134 [2024-12-10 00:17:41.512634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.134 [2024-12-10 00:17:41.512644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.134 [2024-12-10 00:17:41.512653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.134 [2024-12-10 00:17:41.512678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.134 qpair failed and we were unable to recover it. 00:35:57.134 [2024-12-10 00:17:41.522552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.134 [2024-12-10 00:17:41.522619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.134 [2024-12-10 00:17:41.522635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.134 [2024-12-10 00:17:41.522645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.134 [2024-12-10 00:17:41.522653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.134 [2024-12-10 00:17:41.522672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.134 qpair failed and we were unable to recover it. 
00:35:57.134 [2024-12-10 00:17:41.532575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.134 [2024-12-10 00:17:41.532634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.134 [2024-12-10 00:17:41.532650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.134 [2024-12-10 00:17:41.532660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.134 [2024-12-10 00:17:41.532668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.134 [2024-12-10 00:17:41.532686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.134 qpair failed and we were unable to recover it. 00:35:57.134 [2024-12-10 00:17:41.542606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.134 [2024-12-10 00:17:41.542707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.134 [2024-12-10 00:17:41.542726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.134 [2024-12-10 00:17:41.542736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.134 [2024-12-10 00:17:41.542744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.134 [2024-12-10 00:17:41.542762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.134 qpair failed and we were unable to recover it. 00:35:57.134 [2024-12-10 00:17:41.552649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.134 [2024-12-10 00:17:41.552705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.134 [2024-12-10 00:17:41.552720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.134 [2024-12-10 00:17:41.552730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.134 [2024-12-10 00:17:41.552739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.134 [2024-12-10 00:17:41.552757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.134 qpair failed and we were unable to recover it. 
00:35:57.134 [2024-12-10 00:17:41.562719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.134 [2024-12-10 00:17:41.562832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.134 [2024-12-10 00:17:41.562848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.134 [2024-12-10 00:17:41.562857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.134 [2024-12-10 00:17:41.562866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.134 [2024-12-10 00:17:41.562884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.134 qpair failed and we were unable to recover it. 00:35:57.134 [2024-12-10 00:17:41.572699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.134 [2024-12-10 00:17:41.572753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.134 [2024-12-10 00:17:41.572769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.134 [2024-12-10 00:17:41.572778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.134 [2024-12-10 00:17:41.572787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.134 [2024-12-10 00:17:41.572804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.134 qpair failed and we were unable to recover it. 00:35:57.134 [2024-12-10 00:17:41.582719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.134 [2024-12-10 00:17:41.582777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.134 [2024-12-10 00:17:41.582793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.134 [2024-12-10 00:17:41.582803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.134 [2024-12-10 00:17:41.582814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.134 [2024-12-10 00:17:41.582837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.134 qpair failed and we were unable to recover it. 
00:35:57.134 [2024-12-10 00:17:41.592784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.134 [2024-12-10 00:17:41.592868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.134 [2024-12-10 00:17:41.592884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.134 [2024-12-10 00:17:41.592894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.134 [2024-12-10 00:17:41.592903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.134 [2024-12-10 00:17:41.592922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.134 qpair failed and we were unable to recover it. 00:35:57.134 [2024-12-10 00:17:41.602725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.134 [2024-12-10 00:17:41.602788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.134 [2024-12-10 00:17:41.602804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.134 [2024-12-10 00:17:41.602813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.134 [2024-12-10 00:17:41.602828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.134 [2024-12-10 00:17:41.602846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.134 qpair failed and we were unable to recover it. 00:35:57.396 [2024-12-10 00:17:41.612806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.396 [2024-12-10 00:17:41.612870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.396 [2024-12-10 00:17:41.612886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.396 [2024-12-10 00:17:41.612896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.396 [2024-12-10 00:17:41.612905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.396 [2024-12-10 00:17:41.612922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.396 qpair failed and we were unable to recover it. 
00:35:57.396 [2024-12-10 00:17:41.622853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.396 [2024-12-10 00:17:41.622911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.396 [2024-12-10 00:17:41.622927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.396 [2024-12-10 00:17:41.622937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.396 [2024-12-10 00:17:41.622945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.396 [2024-12-10 00:17:41.622963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.396 qpair failed and we were unable to recover it. 00:35:57.396 [2024-12-10 00:17:41.632877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.396 [2024-12-10 00:17:41.632933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.396 [2024-12-10 00:17:41.632950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.396 [2024-12-10 00:17:41.632959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.396 [2024-12-10 00:17:41.632968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.396 [2024-12-10 00:17:41.632987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.396 qpair failed and we were unable to recover it. 00:35:57.396 [2024-12-10 00:17:41.642907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.396 [2024-12-10 00:17:41.642964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.396 [2024-12-10 00:17:41.642979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.396 [2024-12-10 00:17:41.642989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.396 [2024-12-10 00:17:41.642997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.396 [2024-12-10 00:17:41.643015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.396 qpair failed and we were unable to recover it. 
00:35:57.396 [2024-12-10 00:17:41.652915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.396 [2024-12-10 00:17:41.652968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.396 [2024-12-10 00:17:41.652983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.396 [2024-12-10 00:17:41.652993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.396 [2024-12-10 00:17:41.653001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.396 [2024-12-10 00:17:41.653019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.396 qpair failed and we were unable to recover it. 00:35:57.396 [2024-12-10 00:17:41.662945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.396 [2024-12-10 00:17:41.663000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.396 [2024-12-10 00:17:41.663015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.396 [2024-12-10 00:17:41.663025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.396 [2024-12-10 00:17:41.663033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.396 [2024-12-10 00:17:41.663051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.396 qpair failed and we were unable to recover it. 00:35:57.396 [2024-12-10 00:17:41.672986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.396 [2024-12-10 00:17:41.673043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.396 [2024-12-10 00:17:41.673065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.396 [2024-12-10 00:17:41.673075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.396 [2024-12-10 00:17:41.673083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.396 [2024-12-10 00:17:41.673101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.396 qpair failed and we were unable to recover it. 
00:35:57.396 [2024-12-10 00:17:41.683058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.396 [2024-12-10 00:17:41.683122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.396 [2024-12-10 00:17:41.683138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.396 [2024-12-10 00:17:41.683148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.397 [2024-12-10 00:17:41.683156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.397 [2024-12-10 00:17:41.683174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.397 qpair failed and we were unable to recover it. 00:35:57.397 [2024-12-10 00:17:41.693087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.397 [2024-12-10 00:17:41.693190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.397 [2024-12-10 00:17:41.693206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.397 [2024-12-10 00:17:41.693215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.397 [2024-12-10 00:17:41.693224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.397 [2024-12-10 00:17:41.693242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.397 qpair failed and we were unable to recover it. 00:35:57.397 [2024-12-10 00:17:41.703046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.397 [2024-12-10 00:17:41.703104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.397 [2024-12-10 00:17:41.703119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.397 [2024-12-10 00:17:41.703129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.397 [2024-12-10 00:17:41.703138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.397 [2024-12-10 00:17:41.703155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.397 qpair failed and we were unable to recover it. 
00:35:57.397 [2024-12-10 00:17:41.713094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.397 [2024-12-10 00:17:41.713150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.397 [2024-12-10 00:17:41.713165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.397 [2024-12-10 00:17:41.713178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.397 [2024-12-10 00:17:41.713187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.397 [2024-12-10 00:17:41.713204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.397 qpair failed and we were unable to recover it. 00:35:57.397 [2024-12-10 00:17:41.723051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.397 [2024-12-10 00:17:41.723114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.397 [2024-12-10 00:17:41.723130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.397 [2024-12-10 00:17:41.723140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.397 [2024-12-10 00:17:41.723149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.397 [2024-12-10 00:17:41.723166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.397 qpair failed and we were unable to recover it. 00:35:57.397 [2024-12-10 00:17:41.733151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.397 [2024-12-10 00:17:41.733206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.397 [2024-12-10 00:17:41.733222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.397 [2024-12-10 00:17:41.733232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.397 [2024-12-10 00:17:41.733240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.397 [2024-12-10 00:17:41.733258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.397 qpair failed and we were unable to recover it. 
00:35:57.397 [2024-12-10 00:17:41.743207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.397 [2024-12-10 00:17:41.743266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.397 [2024-12-10 00:17:41.743281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.397 [2024-12-10 00:17:41.743291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.397 [2024-12-10 00:17:41.743300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.397 [2024-12-10 00:17:41.743318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.397 qpair failed and we were unable to recover it. 00:35:57.397 [2024-12-10 00:17:41.753211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.397 [2024-12-10 00:17:41.753266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.397 [2024-12-10 00:17:41.753282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.397 [2024-12-10 00:17:41.753292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.397 [2024-12-10 00:17:41.753301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.397 [2024-12-10 00:17:41.753322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.397 qpair failed and we were unable to recover it. 00:35:57.397 [2024-12-10 00:17:41.763255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.397 [2024-12-10 00:17:41.763321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.397 [2024-12-10 00:17:41.763337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.397 [2024-12-10 00:17:41.763347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.397 [2024-12-10 00:17:41.763355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.397 [2024-12-10 00:17:41.763373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.397 qpair failed and we were unable to recover it. 
00:35:57.397 [2024-12-10 00:17:41.773268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.397 [2024-12-10 00:17:41.773341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.397 [2024-12-10 00:17:41.773356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.397 [2024-12-10 00:17:41.773367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.397 [2024-12-10 00:17:41.773376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.397 [2024-12-10 00:17:41.773394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.397 qpair failed and we were unable to recover it. 00:35:57.397 [2024-12-10 00:17:41.783266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.397 [2024-12-10 00:17:41.783321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.397 [2024-12-10 00:17:41.783336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.397 [2024-12-10 00:17:41.783346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.397 [2024-12-10 00:17:41.783354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.397 [2024-12-10 00:17:41.783373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.397 qpair failed and we were unable to recover it. 00:35:57.397 [2024-12-10 00:17:41.793365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.397 [2024-12-10 00:17:41.793435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.397 [2024-12-10 00:17:41.793451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.397 [2024-12-10 00:17:41.793460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.397 [2024-12-10 00:17:41.793469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.397 [2024-12-10 00:17:41.793487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.397 qpair failed and we were unable to recover it. 
00:35:57.397 [2024-12-10 00:17:41.803328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.397 [2024-12-10 00:17:41.803392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.397 [2024-12-10 00:17:41.803408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.397 [2024-12-10 00:17:41.803418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.397 [2024-12-10 00:17:41.803426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.397 [2024-12-10 00:17:41.803445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.397 qpair failed and we were unable to recover it. 00:35:57.397 [2024-12-10 00:17:41.813390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.397 [2024-12-10 00:17:41.813448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.397 [2024-12-10 00:17:41.813464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.397 [2024-12-10 00:17:41.813474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.397 [2024-12-10 00:17:41.813484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.397 [2024-12-10 00:17:41.813502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.397 qpair failed and we were unable to recover it. 00:35:57.398 [2024-12-10 00:17:41.823388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.398 [2024-12-10 00:17:41.823447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.398 [2024-12-10 00:17:41.823462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.398 [2024-12-10 00:17:41.823472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.398 [2024-12-10 00:17:41.823480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.398 [2024-12-10 00:17:41.823498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.398 qpair failed and we were unable to recover it. 
00:35:57.398 [2024-12-10 00:17:41.833470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.398 [2024-12-10 00:17:41.833577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.398 [2024-12-10 00:17:41.833594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.398 [2024-12-10 00:17:41.833604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.398 [2024-12-10 00:17:41.833613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.398 [2024-12-10 00:17:41.833631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.398 qpair failed and we were unable to recover it. 00:35:57.398 [2024-12-10 00:17:41.843448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.398 [2024-12-10 00:17:41.843517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.398 [2024-12-10 00:17:41.843533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.398 [2024-12-10 00:17:41.843545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.398 [2024-12-10 00:17:41.843553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.398 [2024-12-10 00:17:41.843571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.398 qpair failed and we were unable to recover it. 00:35:57.398 [2024-12-10 00:17:41.853467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.398 [2024-12-10 00:17:41.853557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.398 [2024-12-10 00:17:41.853572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.398 [2024-12-10 00:17:41.853582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.398 [2024-12-10 00:17:41.853590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.398 [2024-12-10 00:17:41.853608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.398 qpair failed and we were unable to recover it. 
00:35:57.398 [2024-12-10 00:17:41.863424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.398 [2024-12-10 00:17:41.863512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.398 [2024-12-10 00:17:41.863528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.398 [2024-12-10 00:17:41.863537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.398 [2024-12-10 00:17:41.863545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.398 [2024-12-10 00:17:41.863563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.398 qpair failed and we were unable to recover it. 00:35:57.660 [2024-12-10 00:17:41.873458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.660 [2024-12-10 00:17:41.873515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.660 [2024-12-10 00:17:41.873531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.660 [2024-12-10 00:17:41.873541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.660 [2024-12-10 00:17:41.873550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.660 [2024-12-10 00:17:41.873568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.660 qpair failed and we were unable to recover it. 00:35:57.660 [2024-12-10 00:17:41.883616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.660 [2024-12-10 00:17:41.883672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.660 [2024-12-10 00:17:41.883688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.660 [2024-12-10 00:17:41.883698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.660 [2024-12-10 00:17:41.883706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.660 [2024-12-10 00:17:41.883726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.660 qpair failed and we were unable to recover it. 
00:35:57.660 [2024-12-10 00:17:41.893634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.660 [2024-12-10 00:17:41.893735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.660 [2024-12-10 00:17:41.893752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.660 [2024-12-10 00:17:41.893761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.660 [2024-12-10 00:17:41.893770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.660 [2024-12-10 00:17:41.893787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.660 qpair failed and we were unable to recover it. 00:35:57.660 [2024-12-10 00:17:41.903643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.660 [2024-12-10 00:17:41.903750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.660 [2024-12-10 00:17:41.903766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.660 [2024-12-10 00:17:41.903775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.660 [2024-12-10 00:17:41.903784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.660 [2024-12-10 00:17:41.903802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.660 qpair failed and we were unable to recover it. 00:35:57.660 [2024-12-10 00:17:41.913658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.660 [2024-12-10 00:17:41.913713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.660 [2024-12-10 00:17:41.913728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.660 [2024-12-10 00:17:41.913738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.660 [2024-12-10 00:17:41.913747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.660 [2024-12-10 00:17:41.913764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.660 qpair failed and we were unable to recover it. 
00:35:57.660 [2024-12-10 00:17:41.923683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.660 [2024-12-10 00:17:41.923738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.660 [2024-12-10 00:17:41.923754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.660 [2024-12-10 00:17:41.923763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.660 [2024-12-10 00:17:41.923772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.660 [2024-12-10 00:17:41.923790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.660 qpair failed and we were unable to recover it. 00:35:57.660 [2024-12-10 00:17:41.933711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.660 [2024-12-10 00:17:41.933783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.660 [2024-12-10 00:17:41.933800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.660 [2024-12-10 00:17:41.933809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.660 [2024-12-10 00:17:41.933818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.660 [2024-12-10 00:17:41.933840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.660 qpair failed and we were unable to recover it. 00:35:57.660 [2024-12-10 00:17:41.943747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.660 [2024-12-10 00:17:41.943805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.660 [2024-12-10 00:17:41.943820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.660 [2024-12-10 00:17:41.943834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.660 [2024-12-10 00:17:41.943842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.660 [2024-12-10 00:17:41.943860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.660 qpair failed and we were unable to recover it. 
00:35:57.660 [2024-12-10 00:17:41.953833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.660 [2024-12-10 00:17:41.953935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.660 [2024-12-10 00:17:41.953951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.660 [2024-12-10 00:17:41.953960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.660 [2024-12-10 00:17:41.953969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.660 [2024-12-10 00:17:41.953987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.660 qpair failed and we were unable to recover it. 00:35:57.660 [2024-12-10 00:17:41.963854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.660 [2024-12-10 00:17:41.963911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.660 [2024-12-10 00:17:41.963927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.660 [2024-12-10 00:17:41.963936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.660 [2024-12-10 00:17:41.963945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.660 [2024-12-10 00:17:41.963964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.660 qpair failed and we were unable to recover it. 00:35:57.660 [2024-12-10 00:17:41.973840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.660 [2024-12-10 00:17:41.973891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.660 [2024-12-10 00:17:41.973910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.660 [2024-12-10 00:17:41.973920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.660 [2024-12-10 00:17:41.973928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.660 [2024-12-10 00:17:41.973947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.660 qpair failed and we were unable to recover it. 
00:35:57.660 [2024-12-10 00:17:41.983850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.660 [2024-12-10 00:17:41.983918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.660 [2024-12-10 00:17:41.983934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.660 [2024-12-10 00:17:41.983944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.660 [2024-12-10 00:17:41.983952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.660 [2024-12-10 00:17:41.983970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.660 qpair failed and we were unable to recover it. 00:35:57.660 [2024-12-10 00:17:41.993939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.660 [2024-12-10 00:17:41.994045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.661 [2024-12-10 00:17:41.994061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.661 [2024-12-10 00:17:41.994070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.661 [2024-12-10 00:17:41.994079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.661 [2024-12-10 00:17:41.994097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.661 qpair failed and we were unable to recover it. 00:35:57.661 [2024-12-10 00:17:42.003899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.661 [2024-12-10 00:17:42.003958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.661 [2024-12-10 00:17:42.003974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.661 [2024-12-10 00:17:42.003984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.661 [2024-12-10 00:17:42.003992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.661 [2024-12-10 00:17:42.004010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.661 qpair failed and we were unable to recover it. 
00:35:57.661 [2024-12-10 00:17:42.013936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.661 [2024-12-10 00:17:42.013991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.661 [2024-12-10 00:17:42.014007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.661 [2024-12-10 00:17:42.014017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.661 [2024-12-10 00:17:42.014029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.661 [2024-12-10 00:17:42.014047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.661 qpair failed and we were unable to recover it. 00:35:57.661 [2024-12-10 00:17:42.023965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.661 [2024-12-10 00:17:42.024020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.661 [2024-12-10 00:17:42.024036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.661 [2024-12-10 00:17:42.024046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.661 [2024-12-10 00:17:42.024054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.661 [2024-12-10 00:17:42.024072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.661 qpair failed and we were unable to recover it. 00:35:57.661 [2024-12-10 00:17:42.034009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.661 [2024-12-10 00:17:42.034067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.661 [2024-12-10 00:17:42.034083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.661 [2024-12-10 00:17:42.034093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.661 [2024-12-10 00:17:42.034102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.661 [2024-12-10 00:17:42.034119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.661 qpair failed and we were unable to recover it. 
00:35:57.661 [2024-12-10 00:17:42.044038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.661 [2024-12-10 00:17:42.044091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.661 [2024-12-10 00:17:42.044107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.661 [2024-12-10 00:17:42.044116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.661 [2024-12-10 00:17:42.044125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.661 [2024-12-10 00:17:42.044143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.661 qpair failed and we were unable to recover it. 00:35:57.661 [2024-12-10 00:17:42.054109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.661 [2024-12-10 00:17:42.054211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.661 [2024-12-10 00:17:42.054226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.661 [2024-12-10 00:17:42.054236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.661 [2024-12-10 00:17:42.054245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.661 [2024-12-10 00:17:42.054262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.661 qpair failed and we were unable to recover it. 00:35:57.661 [2024-12-10 00:17:42.064076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.661 [2024-12-10 00:17:42.064163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.661 [2024-12-10 00:17:42.064179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.661 [2024-12-10 00:17:42.064188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.661 [2024-12-10 00:17:42.064196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.661 [2024-12-10 00:17:42.064214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.661 qpair failed and we were unable to recover it. 
00:35:57.661 [2024-12-10 00:17:42.074172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.661 [2024-12-10 00:17:42.074231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.661 [2024-12-10 00:17:42.074246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.661 [2024-12-10 00:17:42.074256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.661 [2024-12-10 00:17:42.074265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.661 [2024-12-10 00:17:42.074283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.661 qpair failed and we were unable to recover it. 00:35:57.661 [2024-12-10 00:17:42.084165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.661 [2024-12-10 00:17:42.084225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.661 [2024-12-10 00:17:42.084241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.661 [2024-12-10 00:17:42.084250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.661 [2024-12-10 00:17:42.084259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.661 [2024-12-10 00:17:42.084276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.661 qpair failed and we were unable to recover it. 00:35:57.661 [2024-12-10 00:17:42.094209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.661 [2024-12-10 00:17:42.094295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.661 [2024-12-10 00:17:42.094310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.662 [2024-12-10 00:17:42.094320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.662 [2024-12-10 00:17:42.094329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.662 [2024-12-10 00:17:42.094346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.662 qpair failed and we were unable to recover it. 
00:35:57.662 [2024-12-10 00:17:42.104229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.662 [2024-12-10 00:17:42.104321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.662 [2024-12-10 00:17:42.104340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.662 [2024-12-10 00:17:42.104350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.662 [2024-12-10 00:17:42.104358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.662 [2024-12-10 00:17:42.104376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.662 qpair failed and we were unable to recover it. 00:35:57.662 [2024-12-10 00:17:42.114209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.662 [2024-12-10 00:17:42.114268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.662 [2024-12-10 00:17:42.114284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.662 [2024-12-10 00:17:42.114294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.662 [2024-12-10 00:17:42.114302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.662 [2024-12-10 00:17:42.114320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.662 qpair failed and we were unable to recover it. 00:35:57.662 [2024-12-10 00:17:42.124333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.662 [2024-12-10 00:17:42.124420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.662 [2024-12-10 00:17:42.124436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.662 [2024-12-10 00:17:42.124445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.662 [2024-12-10 00:17:42.124454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.662 [2024-12-10 00:17:42.124472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.662 qpair failed and we were unable to recover it. 
00:35:57.936 [2024-12-10 00:17:42.134231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.936 [2024-12-10 00:17:42.134285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.936 [2024-12-10 00:17:42.134301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.937 [2024-12-10 00:17:42.134311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.937 [2024-12-10 00:17:42.134320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.937 [2024-12-10 00:17:42.134339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.937 qpair failed and we were unable to recover it. 00:35:57.937 [2024-12-10 00:17:42.144357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.937 [2024-12-10 00:17:42.144415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.937 [2024-12-10 00:17:42.144432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.937 [2024-12-10 00:17:42.144442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.937 [2024-12-10 00:17:42.144454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.937 [2024-12-10 00:17:42.144472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.937 qpair failed and we were unable to recover it. 00:35:57.937 [2024-12-10 00:17:42.154408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.937 [2024-12-10 00:17:42.154466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.937 [2024-12-10 00:17:42.154482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.937 [2024-12-10 00:17:42.154492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.937 [2024-12-10 00:17:42.154501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.937 [2024-12-10 00:17:42.154519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.937 qpair failed and we were unable to recover it. 
00:35:57.937 [2024-12-10 00:17:42.164388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.937 [2024-12-10 00:17:42.164446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.937 [2024-12-10 00:17:42.164462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.937 [2024-12-10 00:17:42.164472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.937 [2024-12-10 00:17:42.164480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.937 [2024-12-10 00:17:42.164499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.937 qpair failed and we were unable to recover it. 00:35:57.937 [2024-12-10 00:17:42.174410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.937 [2024-12-10 00:17:42.174507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.937 [2024-12-10 00:17:42.174523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.937 [2024-12-10 00:17:42.174532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.937 [2024-12-10 00:17:42.174541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.937 [2024-12-10 00:17:42.174559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.937 qpair failed and we were unable to recover it. 00:35:57.937 [2024-12-10 00:17:42.184438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.937 [2024-12-10 00:17:42.184545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.937 [2024-12-10 00:17:42.184561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.937 [2024-12-10 00:17:42.184570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.937 [2024-12-10 00:17:42.184578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.937 [2024-12-10 00:17:42.184596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.937 qpair failed and we were unable to recover it. 
00:35:57.937 [2024-12-10 00:17:42.194444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.937 [2024-12-10 00:17:42.194513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.937 [2024-12-10 00:17:42.194529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.937 [2024-12-10 00:17:42.194539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.937 [2024-12-10 00:17:42.194547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.937 [2024-12-10 00:17:42.194565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.937 qpair failed and we were unable to recover it. 00:35:57.937 [2024-12-10 00:17:42.204492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.937 [2024-12-10 00:17:42.204553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.937 [2024-12-10 00:17:42.204568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.937 [2024-12-10 00:17:42.204578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.937 [2024-12-10 00:17:42.204587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.937 [2024-12-10 00:17:42.204604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.937 qpair failed and we were unable to recover it. 00:35:57.938 [2024-12-10 00:17:42.214490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.938 [2024-12-10 00:17:42.214587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.938 [2024-12-10 00:17:42.214603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.938 [2024-12-10 00:17:42.214612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.938 [2024-12-10 00:17:42.214620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.938 [2024-12-10 00:17:42.214639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.938 qpair failed and we were unable to recover it. 
00:35:57.938 [2024-12-10 00:17:42.224536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.938 [2024-12-10 00:17:42.224637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.938 [2024-12-10 00:17:42.224652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.938 [2024-12-10 00:17:42.224661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.938 [2024-12-10 00:17:42.224670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.938 [2024-12-10 00:17:42.224688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.938 qpair failed and we were unable to recover it. 00:35:57.938 [2024-12-10 00:17:42.234592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.938 [2024-12-10 00:17:42.234695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.938 [2024-12-10 00:17:42.234711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.938 [2024-12-10 00:17:42.234720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.938 [2024-12-10 00:17:42.234729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.938 [2024-12-10 00:17:42.234747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.938 qpair failed and we were unable to recover it. 00:35:57.938 [2024-12-10 00:17:42.244598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.938 [2024-12-10 00:17:42.244656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.938 [2024-12-10 00:17:42.244672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.938 [2024-12-10 00:17:42.244682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.938 [2024-12-10 00:17:42.244690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.938 [2024-12-10 00:17:42.244708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.938 qpair failed and we were unable to recover it. 
00:35:57.938 [2024-12-10 00:17:42.254657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.938 [2024-12-10 00:17:42.254713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.938 [2024-12-10 00:17:42.254728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.938 [2024-12-10 00:17:42.254738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.938 [2024-12-10 00:17:42.254747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.938 [2024-12-10 00:17:42.254765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.938 qpair failed and we were unable to recover it. 00:35:57.938 [2024-12-10 00:17:42.264633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.938 [2024-12-10 00:17:42.264689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.938 [2024-12-10 00:17:42.264705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.938 [2024-12-10 00:17:42.264715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.938 [2024-12-10 00:17:42.264724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.938 [2024-12-10 00:17:42.264742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.938 qpair failed and we were unable to recover it. 00:35:57.938 [2024-12-10 00:17:42.274696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.938 [2024-12-10 00:17:42.274786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.938 [2024-12-10 00:17:42.274801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.938 [2024-12-10 00:17:42.274814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.938 [2024-12-10 00:17:42.274826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.938 [2024-12-10 00:17:42.274845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.938 qpair failed and we were unable to recover it. 
00:35:57.938 [2024-12-10 00:17:42.284744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.938 [2024-12-10 00:17:42.284799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.938 [2024-12-10 00:17:42.284815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.938 [2024-12-10 00:17:42.284829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.938 [2024-12-10 00:17:42.284838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.938 [2024-12-10 00:17:42.284856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.939 qpair failed and we were unable to recover it. 00:35:57.939 [2024-12-10 00:17:42.294767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.939 [2024-12-10 00:17:42.294829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.939 [2024-12-10 00:17:42.294845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.939 [2024-12-10 00:17:42.294855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.939 [2024-12-10 00:17:42.294864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.939 [2024-12-10 00:17:42.294882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.939 qpair failed and we were unable to recover it. 00:35:57.939 [2024-12-10 00:17:42.304762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.939 [2024-12-10 00:17:42.304822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.939 [2024-12-10 00:17:42.304844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.939 [2024-12-10 00:17:42.304853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.939 [2024-12-10 00:17:42.304862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.939 [2024-12-10 00:17:42.304880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.939 qpair failed and we were unable to recover it. 
00:35:57.939 [2024-12-10 00:17:42.315069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.939 [2024-12-10 00:17:42.315124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.939 [2024-12-10 00:17:42.315140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.939 [2024-12-10 00:17:42.315150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.939 [2024-12-10 00:17:42.315159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.939 [2024-12-10 00:17:42.315180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.939 qpair failed and we were unable to recover it. 00:35:57.939 [2024-12-10 00:17:42.324821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.939 [2024-12-10 00:17:42.324884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.939 [2024-12-10 00:17:42.324900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.939 [2024-12-10 00:17:42.324910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.939 [2024-12-10 00:17:42.324918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.939 [2024-12-10 00:17:42.324937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.939 qpair failed and we were unable to recover it. 00:35:57.939 [2024-12-10 00:17:42.334847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.939 [2024-12-10 00:17:42.334917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.939 [2024-12-10 00:17:42.334934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.939 [2024-12-10 00:17:42.334943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.939 [2024-12-10 00:17:42.334958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.939 [2024-12-10 00:17:42.334977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.939 qpair failed and we were unable to recover it. 
00:35:57.939 [2024-12-10 00:17:42.344862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.939 [2024-12-10 00:17:42.344922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.939 [2024-12-10 00:17:42.344937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.939 [2024-12-10 00:17:42.344946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.939 [2024-12-10 00:17:42.344955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.939 [2024-12-10 00:17:42.344973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.939 qpair failed and we were unable to recover it. 00:35:57.939 [2024-12-10 00:17:42.354856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.939 [2024-12-10 00:17:42.354913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.939 [2024-12-10 00:17:42.354929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.939 [2024-12-10 00:17:42.354938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.939 [2024-12-10 00:17:42.354947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.939 [2024-12-10 00:17:42.354966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.939 qpair failed and we were unable to recover it. 00:35:57.939 [2024-12-10 00:17:42.364869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.939 [2024-12-10 00:17:42.364930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.939 [2024-12-10 00:17:42.364945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.939 [2024-12-10 00:17:42.364954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.940 [2024-12-10 00:17:42.364963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.940 [2024-12-10 00:17:42.364981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.940 qpair failed and we were unable to recover it. 
00:35:57.940 [2024-12-10 00:17:42.374919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.940 [2024-12-10 00:17:42.375010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.940 [2024-12-10 00:17:42.375025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.940 [2024-12-10 00:17:42.375034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.940 [2024-12-10 00:17:42.375042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.940 [2024-12-10 00:17:42.375059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.940 qpair failed and we were unable to recover it. 00:35:57.940 [2024-12-10 00:17:42.384965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.940 [2024-12-10 00:17:42.385018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.940 [2024-12-10 00:17:42.385033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.940 [2024-12-10 00:17:42.385042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.940 [2024-12-10 00:17:42.385051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.940 [2024-12-10 00:17:42.385068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.940 qpair failed and we were unable to recover it. 00:35:57.940 [2024-12-10 00:17:42.395019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.940 [2024-12-10 00:17:42.395077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.940 [2024-12-10 00:17:42.395093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.940 [2024-12-10 00:17:42.395103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.940 [2024-12-10 00:17:42.395112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.940 [2024-12-10 00:17:42.395130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.940 qpair failed and we were unable to recover it. 
00:35:57.940 [2024-12-10 00:17:42.404975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.940 [2024-12-10 00:17:42.405032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.940 [2024-12-10 00:17:42.405048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.940 [2024-12-10 00:17:42.405060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.940 [2024-12-10 00:17:42.405069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:57.940 [2024-12-10 00:17:42.405087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:57.940 qpair failed and we were unable to recover it. 00:35:58.204 [2024-12-10 00:17:42.415082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.204 [2024-12-10 00:17:42.415138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.204 [2024-12-10 00:17:42.415153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.204 [2024-12-10 00:17:42.415164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.204 [2024-12-10 00:17:42.415172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:58.204 [2024-12-10 00:17:42.415190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.204 qpair failed and we were unable to recover it. 00:35:58.204 [2024-12-10 00:17:42.425150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.204 [2024-12-10 00:17:42.425207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.204 [2024-12-10 00:17:42.425223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.204 [2024-12-10 00:17:42.425233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.204 [2024-12-10 00:17:42.425241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:58.204 [2024-12-10 00:17:42.425258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.204 qpair failed and we were unable to recover it. 
00:35:58.204 [2024-12-10 00:17:42.435121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.204 [2024-12-10 00:17:42.435193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.204 [2024-12-10 00:17:42.435210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.204 [2024-12-10 00:17:42.435219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.204 [2024-12-10 00:17:42.435227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:58.204 [2024-12-10 00:17:42.435245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.204 qpair failed and we were unable to recover it. 00:35:58.204 [2024-12-10 00:17:42.445200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.204 [2024-12-10 00:17:42.445257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.204 [2024-12-10 00:17:42.445273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.204 [2024-12-10 00:17:42.445282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.204 [2024-12-10 00:17:42.445291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:58.204 [2024-12-10 00:17:42.445316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.204 qpair failed and we were unable to recover it. 00:35:58.204 [2024-12-10 00:17:42.455123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.204 [2024-12-10 00:17:42.455184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.204 [2024-12-10 00:17:42.455199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.204 [2024-12-10 00:17:42.455209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.204 [2024-12-10 00:17:42.455218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:58.204 [2024-12-10 00:17:42.455236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.204 qpair failed and we were unable to recover it. 
00:35:58.204 [2024-12-10 00:17:42.465233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.204 [2024-12-10 00:17:42.465286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.204 [2024-12-10 00:17:42.465301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.204 [2024-12-10 00:17:42.465311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.204 [2024-12-10 00:17:42.465320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:58.204 [2024-12-10 00:17:42.465337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.204 qpair failed and we were unable to recover it. 00:35:58.204 [2024-12-10 00:17:42.475309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.204 [2024-12-10 00:17:42.475366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.204 [2024-12-10 00:17:42.475381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.204 [2024-12-10 00:17:42.475391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.204 [2024-12-10 00:17:42.475400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:58.204 [2024-12-10 00:17:42.475418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.204 qpair failed and we were unable to recover it. 00:35:58.204 [2024-12-10 00:17:42.485282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.205 [2024-12-10 00:17:42.485340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.205 [2024-12-10 00:17:42.485355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.205 [2024-12-10 00:17:42.485365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.205 [2024-12-10 00:17:42.485374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:58.205 [2024-12-10 00:17:42.485392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.205 qpair failed and we were unable to recover it. 
00:35:58.205 [2024-12-10 00:17:42.495312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.205 [2024-12-10 00:17:42.495386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.205 [2024-12-10 00:17:42.495403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.205 [2024-12-10 00:17:42.495413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.205 [2024-12-10 00:17:42.495422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa980000b90 00:35:58.205 [2024-12-10 00:17:42.495440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:58.205 qpair failed and we were unable to recover it. 00:35:58.205 [2024-12-10 00:17:42.505379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.205 [2024-12-10 00:17:42.505517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.205 [2024-12-10 00:17:42.505580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.205 [2024-12-10 00:17:42.505615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.205 [2024-12-10 00:17:42.505646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa978000b90 00:35:58.205 [2024-12-10 00:17:42.505709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.205 qpair failed and we were unable to recover it. 00:35:58.205 [2024-12-10 00:17:42.515378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.205 [2024-12-10 00:17:42.515466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.205 [2024-12-10 00:17:42.515500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.205 [2024-12-10 00:17:42.515522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.205 [2024-12-10 00:17:42.515541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa978000b90 00:35:58.205 [2024-12-10 00:17:42.515581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.205 qpair failed and we were unable to recover it. 
00:35:58.205 [2024-12-10 00:17:42.525469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.205 [2024-12-10 00:17:42.525607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.205 [2024-12-10 00:17:42.525669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.205 [2024-12-10 00:17:42.525705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.205 [2024-12-10 00:17:42.525736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa974000b90 00:35:58.205 [2024-12-10 00:17:42.525798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.205 qpair failed and we were unable to recover it. 00:35:58.205 [2024-12-10 00:17:42.535473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.205 [2024-12-10 00:17:42.535583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.205 [2024-12-10 00:17:42.535657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.205 [2024-12-10 00:17:42.535693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.205 [2024-12-10 00:17:42.535724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x243d000 00:35:58.205 [2024-12-10 00:17:42.535784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.205 qpair failed and we were unable to recover it. 00:35:58.205 [2024-12-10 00:17:42.545434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.205 [2024-12-10 00:17:42.545520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.205 [2024-12-10 00:17:42.545555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.205 [2024-12-10 00:17:42.545576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.205 [2024-12-10 00:17:42.545595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x243d000 00:35:58.205 [2024-12-10 00:17:42.545631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.205 qpair failed and we were unable to recover it. 00:35:58.205 [2024-12-10 00:17:42.545868] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:35:58.205 A controller has encountered a failure and is being reset. 
00:35:58.205 [2024-12-10 00:17:42.555512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.205 [2024-12-10 00:17:42.555627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.205 [2024-12-10 00:17:42.555679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.205 [2024-12-10 00:17:42.555710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.205 [2024-12-10 00:17:42.555740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa974000b90 00:35:58.205 [2024-12-10 00:17:42.555799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.205 qpair failed and we were unable to recover it. 00:35:58.205 Controller properly reset. 00:35:58.205 Initializing NVMe Controllers 00:35:58.205 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:58.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:58.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:58.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:58.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:58.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:58.205 Initialization complete. Launching workers. 00:35:58.205 Starting thread on core 1 00:35:58.205 Starting thread on core 2 00:35:58.205 Starting thread on core 3 00:35:58.205 Starting thread on core 0 00:35:58.205 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:58.205 00:35:58.205 real 0m11.383s 00:35:58.205 user 0m21.469s 00:35:58.205 sys 0m5.156s 00:35:58.205 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:58.205 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:58.205 ************************************ 00:35:58.205 END TEST nvmf_target_disconnect_tc2 00:35:58.205 ************************************ 00:35:58.205 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:58.205 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:58.205 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:58.205 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:58.205 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:35:58.205 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:58.205 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:35:58.205 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:58.205 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:35:58.205 rmmod nvme_tcp 00:35:58.205 rmmod nvme_fabrics 00:35:58.466 rmmod nvme_keyring 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 596380 ']' 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 596380 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 596380 ']' 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 596380 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 596380 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 596380' 00:35:58.466 killing process with pid 596380 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 596380 00:35:58.466 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 596380 00:35:58.726 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:58.726 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:58.726 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:58.726 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:35:58.726 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:35:58.726 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:58.726 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:35:58.726 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:58.726 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:58.726 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.726 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:58.726 00:17:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.642 00:17:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:00.642 00:36:00.642 real 0m21.633s 00:36:00.642 user 0m49.342s 00:36:00.642 sys 0m11.362s 00:36:00.642 00:17:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.642 00:17:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:00.642 ************************************ 00:36:00.642 END TEST nvmf_target_disconnect 00:36:00.642 ************************************ 00:36:00.642 00:17:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:00.642 00:36:00.642 real 6m35.235s 00:36:00.642 user 11m21.932s 00:36:00.642 sys 2m28.078s 00:36:00.642 00:17:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.642 00:17:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.642 ************************************ 00:36:00.642 END TEST nvmf_host 00:36:00.642 ************************************ 00:36:00.920 00:17:45 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:00.920 00:17:45 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:00.920 00:17:45 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:00.920 00:17:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:00.920 00:17:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:00.920 00:17:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:00.920 ************************************ 00:36:00.920 START TEST nvmf_target_core_interrupt_mode 00:36:00.920 ************************************ 00:36:00.920 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:00.920 * Looking for test storage... 
00:36:00.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:00.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.921 --rc genhtml_branch_coverage=1 00:36:00.921 --rc genhtml_function_coverage=1 00:36:00.921 --rc genhtml_legend=1 00:36:00.921 --rc geninfo_all_blocks=1 00:36:00.921 --rc geninfo_unexecuted_blocks=1 00:36:00.921 00:36:00.921 ' 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:00.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.921 --rc genhtml_branch_coverage=1 00:36:00.921 --rc genhtml_function_coverage=1 00:36:00.921 --rc genhtml_legend=1 00:36:00.921 --rc geninfo_all_blocks=1 00:36:00.921 --rc geninfo_unexecuted_blocks=1 00:36:00.921 00:36:00.921 ' 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:00.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.921 --rc genhtml_branch_coverage=1 00:36:00.921 --rc genhtml_function_coverage=1 00:36:00.921 --rc genhtml_legend=1 00:36:00.921 --rc geninfo_all_blocks=1 00:36:00.921 --rc geninfo_unexecuted_blocks=1 00:36:00.921 00:36:00.921 ' 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:00.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.921 --rc genhtml_branch_coverage=1 00:36:00.921 --rc genhtml_function_coverage=1 00:36:00.921 --rc genhtml_legend=1 00:36:00.921 --rc geninfo_all_blocks=1 00:36:00.921 --rc geninfo_unexecuted_blocks=1 00:36:00.921 00:36:00.921 ' 00:36:00.921 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.208 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:01.209 ************************************ 00:36:01.209 START TEST nvmf_abort 00:36:01.209 ************************************ 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:01.209 * Looking for test storage... 00:36:01.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:01.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.209 --rc genhtml_branch_coverage=1 00:36:01.209 --rc genhtml_function_coverage=1 00:36:01.209 --rc genhtml_legend=1 00:36:01.209 --rc geninfo_all_blocks=1 00:36:01.209 --rc geninfo_unexecuted_blocks=1 00:36:01.209 00:36:01.209 ' 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:01.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.209 --rc genhtml_branch_coverage=1 00:36:01.209 --rc genhtml_function_coverage=1 00:36:01.209 --rc genhtml_legend=1 00:36:01.209 --rc geninfo_all_blocks=1 00:36:01.209 --rc geninfo_unexecuted_blocks=1 00:36:01.209 00:36:01.209 ' 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:01.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.209 --rc genhtml_branch_coverage=1 00:36:01.209 --rc genhtml_function_coverage=1 00:36:01.209 --rc genhtml_legend=1 00:36:01.209 --rc geninfo_all_blocks=1 00:36:01.209 --rc geninfo_unexecuted_blocks=1 00:36:01.209 00:36:01.209 ' 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:01.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.209 --rc genhtml_branch_coverage=1 00:36:01.209 --rc genhtml_function_coverage=1 00:36:01.209 --rc genhtml_legend=1 00:36:01.209 --rc geninfo_all_blocks=1 00:36:01.209 --rc geninfo_unexecuted_blocks=1 00:36:01.209 00:36:01.209 ' 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:01.209 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:01.494 00:17:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:01.494 00:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:08.171 00:17:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:08.171 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:08.171 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:08.171 Found net devices under 0000:af:00.0: cvl_0_0 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.171 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:08.172 Found net devices under 0000:af:00.1: cvl_0_1 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:08.172 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:08.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:08.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:36:08.433 00:36:08.433 --- 10.0.0.2 ping statistics --- 00:36:08.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.433 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:08.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:08.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:36:08.433 00:36:08.433 --- 10.0.0.1 ping statistics --- 00:36:08.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.433 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:08.433 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:08.693 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:08.693 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:08.693 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:08.693 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:08.693 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=601224 
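The nvmf_tcp_init trace above splits the two E810 ports between the root namespace and a private one so target and initiator run on a single machine over real NICs: cvl_0_0 moves into cvl_0_0_ns_spdk and takes 10.0.0.2/24, cvl_0_1 stays on the host with 10.0.0.1/24, port 4420 is opened in iptables, and connectivity is verified with ping before nvme-tcp is loaded. A minimal manual equivalent, assuming the same interface and namespace names as the trace and root privileges:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address stays on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach port 4420
  ping -c 1 10.0.0.2                                             # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host
  modprobe nvme-tcp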
00:36:08.693 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:08.693 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 601224 00:36:08.693 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 601224 ']' 00:36:08.693 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.693 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:08.693 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.693 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:08.693 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:08.693 [2024-12-10 00:17:52.988796] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:08.693 [2024-12-10 00:17:52.989683] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:36:08.693 [2024-12-10 00:17:52.989716] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:08.693 [2024-12-10 00:17:53.082688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:08.693 [2024-12-10 00:17:53.123166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:08.693 [2024-12-10 00:17:53.123207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:08.693 [2024-12-10 00:17:53.123217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:08.693 [2024-12-10 00:17:53.123228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:08.693 [2024-12-10 00:17:53.123236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:08.693 [2024-12-10 00:17:53.124803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:08.693 [2024-12-10 00:17:53.124915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:08.693 [2024-12-10 00:17:53.124916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:08.953 [2024-12-10 00:17:53.192407] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:08.953 [2024-12-10 00:17:53.193140] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:08.953 [2024-12-10 00:17:53.193224] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:08.953 [2024-12-10 00:17:53.193395] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.523 [2024-12-10 00:17:53.873848] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.523 Malloc0 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.523 Delay0 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.523 [2024-12-10 00:17:53.969759] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.523 00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:09.783 [2024-12-10 00:17:54.106153] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:12.317 Initializing NVMe Controllers 00:36:12.317 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:12.317 controller IO queue size 128 less than required 00:36:12.317 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:12.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:12.317 Initialization complete. Launching workers. 
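The setup traced above amounts to a short provisioning script: a TCP transport, a malloc bdev wrapped in a delay bdev (the large delay values keep I/O in flight long enough for aborts to find it), a subsystem carrying that namespace, and listeners for the subsystem and for discovery, after which the abort example is pointed at the target. A condensed sketch in bash, using only commands and parameters visible in the trace (rpc_cmd is the harness's wrapper around scripts/rpc.py; paths are the workspace paths above):

# Provision the abort-test target (parameters as traced)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Drive abort traffic at the target from core 0 (remaining flags copied verbatim from the trace)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128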
00:36:12.317 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37139 00:36:12.317 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37196, failed to submit 66 00:36:12.317 success 37139, unsuccessful 57, failed 0 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:12.317 rmmod nvme_tcp 00:36:12.317 rmmod nvme_fabrics 00:36:12.317 rmmod nvme_keyring 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 601224 ']' 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 601224 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 601224 ']' 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 601224 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601224 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 601224' 00:36:12.317 killing process with pid 601224 00:36:12.317 
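The result summary above is internally consistent: 37,196 aborts were submitted, of which 37,139 succeeded and 57 did not (37,139 + 57 = 37,196), and a further 66 could not be submitted at all. The trace then tears the target back down; a condensed sketch of that cleanup, limited to commands actually visible here and in the lines that follow (remove_spdk_ns runs with its output redirected away, so its internals are not shown):

rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # drop the test subsystem
modprobe -v -r nvme-tcp                                    # unload initiator-side modules
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                         # stop nvmf_tgt (pid 601224 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore       # keep only rules not tagged SPDK_NVMF
ip -4 addr flush cvl_0_1                                   # clear the initiator-side address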
00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 601224 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 601224 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:12.317 00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:14.226 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:14.226 00:36:14.226 real 0m13.201s 00:36:14.226 user 0m10.879s 00:36:14.226 sys 0m7.142s 00:36:14.226 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:14.226 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:14.226 ************************************ 00:36:14.226 END TEST nvmf_abort 00:36:14.226 ************************************ 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:14.486 ************************************ 00:36:14.486 START TEST nvmf_ns_hotplug_stress 00:36:14.486 ************************************ 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:14.486 * Looking for test storage... 
00:36:14.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:14.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.486 --rc genhtml_branch_coverage=1 00:36:14.486 --rc genhtml_function_coverage=1 00:36:14.486 --rc genhtml_legend=1 00:36:14.486 --rc geninfo_all_blocks=1 00:36:14.486 --rc geninfo_unexecuted_blocks=1 00:36:14.486 00:36:14.486 ' 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:14.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.486 --rc genhtml_branch_coverage=1 00:36:14.486 --rc genhtml_function_coverage=1 00:36:14.486 --rc genhtml_legend=1 00:36:14.486 --rc geninfo_all_blocks=1 00:36:14.486 --rc geninfo_unexecuted_blocks=1 00:36:14.486 00:36:14.486 ' 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:14.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.486 --rc genhtml_branch_coverage=1 00:36:14.486 --rc genhtml_function_coverage=1 00:36:14.486 --rc genhtml_legend=1 00:36:14.486 --rc geninfo_all_blocks=1 00:36:14.486 --rc geninfo_unexecuted_blocks=1 00:36:14.486 00:36:14.486 ' 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:14.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.486 --rc genhtml_branch_coverage=1 00:36:14.486 --rc genhtml_function_coverage=1 
00:36:14.486 --rc genhtml_legend=1 00:36:14.486 --rc geninfo_all_blocks=1 00:36:14.486 --rc geninfo_unexecuted_blocks=1 00:36:14.486 00:36:14.486 ' 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:14.486 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:14.487 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:14.487 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:14.487 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
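A few lines up, the harness decides which lcov flags to use by extracting the version with awk and comparing it field by field against 2 (lt 1.15 2 → cmp_versions 1.15 '<' 2, which returns 0 here). A minimal sketch of that comparison logic, condensed from the trace (helper names kept; the full operator handling and the decimal sanitizer in scripts/common.sh are omitted):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local IFS=.-                        # split version strings on dots and dashes, as traced
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v d1 d2
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 > d2)) && return 1         # left side newer: "<" is false
        ((d1 < d2)) && return 0         # left side older: "<" is true
    done
    return 1                            # versions equal: strictly-less is false
}

lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"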
00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:14.748 00:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:22.890 00:18:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:22.890 00:18:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:22.890 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:22.890 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:22.890 
00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:22.890 Found net devices under 0000:af:00.0: cvl_0_0 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:22.890 Found net devices under 0000:af:00.1: cvl_0_1 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:22.890 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:22.891 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:22.891 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:22.891 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:22.891 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:22.891 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:22.891 00:18:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:22.891 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:22.891 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:22.891 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:22.891 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:22.891 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:22.891 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:22.891 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:22.891 00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:22.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:22.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:36:22.891 00:36:22.891 --- 10.0.0.2 ping statistics --- 00:36:22.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.891 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:22.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:22.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:36:22.891 00:36:22.891 --- 10.0.0.1 ping statistics --- 00:36:22.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.891 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=605994 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 605994 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 605994 ']' 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:22.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
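The initialization traced above puts the two e810 ports on opposite sides of a network namespace: cvl_0_0 becomes the target interface inside cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the default namespace as the initiator with 10.0.0.1/24, the firewall is opened for the NVMe/TCP port, and both directions are ping-verified before the target is launched. Condensed to the commands visible in the trace:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator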
00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:22.891 00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:22.891 [2024-12-10 00:18:06.296699] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:22.891 [2024-12-10 00:18:06.297665] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:36:22.891 [2024-12-10 00:18:06.297700] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:22.891 [2024-12-10 00:18:06.393203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:22.891 [2024-12-10 00:18:06.434668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:22.891 [2024-12-10 00:18:06.434705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:22.891 [2024-12-10 00:18:06.434714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:22.891 [2024-12-10 00:18:06.434723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:22.891 [2024-12-10 00:18:06.434730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:22.891 [2024-12-10 00:18:06.436186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:22.891 [2024-12-10 00:18:06.436293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:22.891 [2024-12-10 00:18:06.436294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:22.891 [2024-12-10 00:18:06.504572] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:22.891 [2024-12-10 00:18:06.505259] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:22.891 [2024-12-10 00:18:06.505490] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:22.891 [2024-12-10 00:18:06.505592] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
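With the topology verified, the target application is started inside the namespace in interrupt mode with a three-core mask (0xE, i.e. cores 1, 2 and 3, matching the three reactors reported above), and the startup notices confirm each poll-group thread comes up in intr mode. A sketch of the launch and the liveness wait, reusing the command from the trace (the polling loop is an illustrative stand-in for the harness's waitforlisten helper):

NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!                                     # 605994 in this run

# Wait for the RPC socket the log says the app listens on (illustrative check only)
until [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done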
00:36:22.891 00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:22.891 00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:22.891 00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:22.891 00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:22.891 00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:22.891 00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:22.891 00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:22.891 00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:22.891 [2024-12-10 00:18:07.349120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:23.151 00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:23.151 00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:23.410 [2024-12-10 00:18:07.749516] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:23.410 00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:23.670 00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:23.670 Malloc0 00:36:23.929 00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:23.929 Delay0 00:36:23.929 00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:24.188 00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:24.448 NULL1 00:36:24.448 00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
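For the hotplug-stress run the provisioning goes through scripts/rpc.py directly: one subsystem capped at 10 namespaces, a delay-wrapped malloc bdev plus a resizable null bdev, and listeners for the subsystem and for discovery. Condensed from the rpc.py calls in the trace (rpc below is shorthand for the full scripts/rpc.py path used above):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0              # 32 MB malloc bdev, 512-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512                   # 1000 MB null bdev, resized during the test
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1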
00:36:24.448 00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=606522 00:36:24.448 00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:24.448 00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:24.448 00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:25.830 Read completed with error (sct=0, sc=11) 00:36:25.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.830 00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:25.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:26.090 00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:26.090 00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:26.090 true 00:36:26.090 00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:26.090 00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.042 00:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.303 00:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:27.303 00:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:27.303 true 00:36:27.303 00:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:27.303 00:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.563 00:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.822 00:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:27.822 00:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:28.081 true 00:36:28.081 00:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:28.081 00:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.017 00:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:29.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.284 00:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:29.284 00:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:29.547 true 00:36:29.547 00:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:29.547 00:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.807 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:29.807 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:29.807 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:30.071 true 00:36:30.071 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:30.071 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.330 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:36:30.589 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:30.589 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:30.589 true 00:36:30.589 00:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:30.589 00:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.853 00:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.113 00:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:31.113 00:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:31.113 true 00:36:31.372 00:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:31.372 00:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:32.317 00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:32.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:32.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:32.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:32.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:32.576 00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:32.576 00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:32.835 true 00:36:32.835 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:32.835 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.093 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.093 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:33.093 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:33.351 true 00:36:33.351 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:33.351 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.610 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.873 00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:33.873 00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:33.873 true 00:36:33.873 00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:33.873 00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.135 00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.394 00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:34.394 00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:34.653 true 00:36:34.653 00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:34.653 00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:35.591 00:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:35.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:35.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:35.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:35.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:35.850 00:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 
-- # null_size=1012 00:36:35.850 00:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:36.109 true 00:36:36.109 00:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:36.109 00:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.047 00:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.047 00:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:37.047 00:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:37.306 true 00:36:37.306 00:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:37.306 00:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.306 00:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.564 00:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:37.564 00:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:37.823 true 00:36:37.823 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:37.823 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.759 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:39.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:39.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:39.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:39.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:39.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:39.018 00:18:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:39.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:39.277 true 00:36:39.277 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:39.277 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.214 00:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.214 00:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:40.214 00:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:40.471 true 00:36:40.471 00:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:40.471 00:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.730 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.989 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:40.989 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:40.989 true 00:36:40.989 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:40.989 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:42.366 00:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:42.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:42.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:42.366 00:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:42.366 00:18:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:42.625 true 00:36:42.625 00:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:42.625 00:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.885 00:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.885 00:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:42.885 00:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:43.144 true 00:36:43.144 00:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:43.144 00:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.523 00:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.524 00:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:44.524 00:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:44.782 true 00:36:44.782 00:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:44.782 00:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.717 00:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.718 00:18:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:45.718 00:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:45.977 true 00:36:45.977 00:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:45.977 00:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.236 00:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.236 00:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:46.236 00:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:46.495 true 00:36:46.495 00:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:46.495 00:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.874 00:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.874 00:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:47.874 00:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:48.133 true 00:36:48.133 00:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:48.133 00:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.081 00:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.081 00:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:49.081 00:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:49.340 true 00:36:49.340 00:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:49.340 00:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.599 00:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.599 00:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:49.599 00:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:49.864 true 00:36:49.864 00:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:49.864 00:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.244 00:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.244 00:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:51.244 00:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:51.244 true 00:36:51.503 00:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:51.503 00:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.072 00:18:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.331 00:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:52.331 00:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:52.594 true 00:36:52.594 00:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:52.594 00:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.868 00:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.868 00:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:52.868 00:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:53.127 true 00:36:53.127 00:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:53.127 00:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.509 00:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:54.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:54.509 00:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:36:54.509 00:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:36:54.768 true 00:36:54.768 00:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522 00:36:54.768 00:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1
00:36:55.706 00:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:55.706 Initializing NVMe Controllers
00:36:55.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:55.706 Controller IO queue size 128, less than required.
00:36:55.706 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:55.706 Controller IO queue size 128, less than required.
00:36:55.706 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:55.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:36:55.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:36:55.706 Initialization complete. Launching workers.
00:36:55.706 ========================================================
00:36:55.706                                                           Latency(us)
00:36:55.706 Device Information                                                       : IOPS      MiB/s    Average    min        max
00:36:55.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2181.03   1.06     38441.12   2456.39    1049177.59
00:36:55.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17278.82  8.44     7409.40    1269.28    360184.43
00:36:55.706 ========================================================
00:36:55.706 Total                                                                    : 19459.85  9.50     10887.39   1269.28    1049177.59
00:36:55.706
00:36:55.706 00:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:36:55.706 00:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:36:55.964 true
00:36:55.964 00:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 606522
00:36:55.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (606522) - No such process
00:36:55.964 00:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 606522
00:36:56.223 00:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:56.223 00:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:56.223 00:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:36:56.223 00:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:36:56.223 00:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:36:56.223 00:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:36:56.223 00:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:56.482 null0 00:36:56.482 00:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:56.482 00:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:56.482 00:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:56.740 null1 00:36:56.740 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:56.740 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:56.740 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:56.740 null2 00:36:56.740 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:56.740 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:56.740 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:56.999 null3 00:36:56.999 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:56.999 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:56.999 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:57.258 null4 00:36:57.258 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:57.258 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:57.258 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:57.517 null5 00:36:57.517 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:57.517 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:57.517 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:57.517 null6 00:36:57.517 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:57.517 00:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:57.517 00:18:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:57.777 null7 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
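The phase traced up to the latency summary above corresponds to ns_hotplug_stress.sh lines 44-50 in the xtrace: while the background I/O generator (PID 606522 in this run) is alive, namespace 1 is repeatedly hot-removed and re-added and the NULL1 bdev is grown one unit at a time (null_size 1003 ... 1030). A minimal bash sketch of that loop, reconstructed from the trace rather than quoted from the script (the PERF_PID variable name and the starting null_size are assumptions):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subsys=nqn.2016-06.io.spdk:cnode1
  null_size=1002                                        # assumed starting point; the trace picks up at 1003
  # PERF_PID: pid of the background I/O generator whose exit ends the loop (606522 here)
  while kill -0 "$PERF_PID"; do
      $rpc_py nvmf_subsystem_remove_ns "$subsys" 1      # hot-remove NSID 1 while I/O is running
      $rpc_py nvmf_subsystem_add_ns "$subsys" Delay0    # re-attach the Delay0 bdev (NSID auto-assigned)
      ((++null_size))                                   # 1003, 1004, ... 1030 in this run
      $rpc_py bdev_null_resize NULL1 "$null_size"       # grow the NULL1 bdev one unit
  done
  wait "$PERF_PID"                                      # script line 53: reap the generator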
00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:57.777 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
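Each add_remove job traced here runs the small helper from lines 14-18 of the xtrace: attach a given null bdev under a fixed NSID, detach it again, ten times. A sketch of that helper, reconstructed from the trace under the same rpc.py interface:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          # attach $bdev as namespace $nsid of cnode1, then hot-remove it
          $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }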
00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
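The surrounding entries trace script lines 58-66: eight null bdevs (null0-null7, created with arguments name, size in MB, block size) are set up, one background add_remove worker is launched per bdev with NSIDs 1-8, the PIDs are collected, and the script later waits on all of them (the "wait 611880 611884 ..." entry below). Approximately, reusing rpc_py and add_remove from the sketches above:

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      $rpc_py bdev_null_create "null$i" 100 4096        # 100 MB null bdev, 4096-byte block size
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove "$((i + 1))" "null$i" &                # NSIDs 1..8 against null0..null7, in the background
      pids+=($!)
  done
  wait "${pids[@]}"                                     # script line 66: wait for all eight workers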
00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 611880 611884 611886 611888 611890 611892 611894 611896 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.778 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:58.038 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:58.038 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:58.038 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.038 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:58.038 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:58.038 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:58.038 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:58.038 00:18:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:58.038 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.038 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.038 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:58.038 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.038 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.038 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.297 00:18:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:58.297 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.298 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:58.298 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:58.298 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:58.298 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.557 00:18:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.557 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:58.817 00:18:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:58.817 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:58.817 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:58.817 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.817 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:58.817 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:58.817 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:58.817 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:59.078 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.078 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.078 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:59.078 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.078 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.078 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:59.078 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:59.079 00:18:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:59.079 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:59.337 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:59.337 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:59.337 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.337 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.338 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:59.596 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.596 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:59.596 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:59.596 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:59.596 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:59.596 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:59.596 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:59.597 00:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:59.855 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.855 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.855 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:59.855 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.855 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.855 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:59.855 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.855 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.855 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:59.856 00:18:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.856 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:00.115 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.116 00:18:44 
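To observe this churn from a second shell while the loop runs, the subsystem's live namespace list can be dumped between iterations. The helper below is hypothetical, but it uses only the standard nvmf_get_subsystems RPC plus python3 for JSON parsing:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_get_subsystems | python3 -c '
import json, sys
for ss in json.load(sys.stdin):
    if ss["nqn"] == "nqn.2016-06.io.spdk:cnode1":
        # print the nsids currently attached to the stressed subsystem
        print(sorted(ns["nsid"] for ns in ss["namespaces"]))
'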
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.116 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:00.375 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:00.375 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:00.375 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:00.375 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:00.375 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:00.375 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:00.375 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:00.375 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.635 00:18:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.635 00:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:00.894 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:01.152 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.152 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:01.152 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:01.153 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:01.153 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:01.153 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:01.153 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:01.153 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.410 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:01.668 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.668 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:01.668 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:01.668 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:01.668 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:01.668 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:01.668 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:01.668 00:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:01.668 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.668 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:01.927 rmmod nvme_tcp 00:37:01.927 rmmod nvme_fabrics 00:37:01.927 rmmod nvme_keyring 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 605994 ']' 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 605994 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 605994 ']' 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 605994 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:01.927 00:18:46 
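The teardown traced from here on (nvmftestfini) boils down to a few steps: unload the kernel initiator modules, stop the nvmf_tgt process, restore the iptables rules the test added, and flush the test interface. A simplified re-creation, assuming the target pid is held in nvmf_tgt_pid (605994 in this run) and that the target was started from the same shell so wait applies:

sync
modprobe -v -r nvme-tcp || true       # unloads nvme_tcp, nvme_fabrics, nvme_keyring, matching the rmmod lines above
modprobe -v -r nvme-fabrics || true
if ps --no-headers -o comm= "$nvmf_tgt_pid" | grep -q '^reactor'; then
    # only kill processes that look like SPDK reactors, as the killprocess helper does
    kill "$nvmf_tgt_pid"
    wait "$nvmf_tgt_pid" || true
fi
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the SPDK_NVMF test rules
ip -4 addr flush cvl_0_1                               # test NIC, as in the flush a few lines below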
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 605994 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 605994' 00:37:01.927 killing process with pid 605994 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 605994 00:37:01.927 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 605994 00:37:02.187 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:02.187 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:02.187 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:02.187 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:02.187 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:37:02.187 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:02.187 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:37:02.187 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:02.187 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:02.187 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:02.187 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:02.187 00:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:04.094 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:04.094 00:37:04.094 real 0m49.812s 00:37:04.094 user 2m54.759s 00:37:04.094 sys 0m25.973s 00:37:04.094 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:04.094 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:04.094 ************************************ 00:37:04.094 END TEST nvmf_ns_hotplug_stress 00:37:04.094 ************************************ 00:37:04.353 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:04.353 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 
-- # '[' 4 -le 1 ']' 00:37:04.353 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:04.353 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:04.353 ************************************ 00:37:04.353 START TEST nvmf_delete_subsystem 00:37:04.353 ************************************ 00:37:04.353 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:04.353 * Looking for test storage... 00:37:04.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:04.353 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:04.353 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:37:04.353 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:04.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.613 --rc genhtml_branch_coverage=1 00:37:04.613 --rc genhtml_function_coverage=1 00:37:04.613 --rc genhtml_legend=1 00:37:04.613 --rc geninfo_all_blocks=1 00:37:04.613 --rc geninfo_unexecuted_blocks=1 00:37:04.613 00:37:04.613 ' 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:04.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.613 --rc genhtml_branch_coverage=1 00:37:04.613 --rc genhtml_function_coverage=1 00:37:04.613 --rc genhtml_legend=1 00:37:04.613 --rc geninfo_all_blocks=1 00:37:04.613 --rc geninfo_unexecuted_blocks=1 00:37:04.613 00:37:04.613 ' 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:04.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.613 --rc genhtml_branch_coverage=1 00:37:04.613 --rc genhtml_function_coverage=1 00:37:04.613 --rc genhtml_legend=1 00:37:04.613 --rc geninfo_all_blocks=1 00:37:04.613 --rc geninfo_unexecuted_blocks=1 00:37:04.613 00:37:04.613 ' 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:04.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.613 --rc genhtml_branch_coverage=1 00:37:04.613 --rc genhtml_function_coverage=1 00:37:04.613 --rc 
genhtml_legend=1 00:37:04.613 --rc geninfo_all_blocks=1 00:37:04.613 --rc geninfo_unexecuted_blocks=1 00:37:04.613 00:37:04.613 ' 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:04.613 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:04.614 00:18:48 
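The cmp_versions trace a short way above (scripts/common.sh, invoked as "lt 1.15 2" on the installed lcov version) amounts to a field-by-field numeric compare used to pick the right lcov option names. A hedged stand-alone paraphrase, assuming purely numeric version fields:

version_lt() {                        # true (0) when $1 sorts before $2
    local IFS='.-:'
    local -a a=($1) b=($2)            # split both versions on . - :
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}    # missing fields compare as 0
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1
}
version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'pre-2.0 lcov: keep the legacy --rc lcov_* option names'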
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:04.614 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:12.749 00:18:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:12.749 00:18:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:12.749 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:12.749 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:12.749 00:18:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:12.749 Found net devices under 0000:af:00.0: cvl_0_0 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:12.749 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:12.750 Found net devices under 0000:af:00.1: cvl_0_1 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:12.750 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:12.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:12.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:37:12.750 00:37:12.750 --- 10.0.0.2 ping statistics --- 00:37:12.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.750 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:12.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:12.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:37:12.750 00:37:12.750 --- 10.0.0.1 ping statistics --- 00:37:12.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.750 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=616506 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 616506 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 616506 ']' 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:12.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:12.750 00:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.750 [2024-12-10 00:18:56.206776] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:12.750 [2024-12-10 00:18:56.207752] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:37:12.750 [2024-12-10 00:18:56.207789] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:12.750 [2024-12-10 00:18:56.301437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:12.750 [2024-12-10 00:18:56.340663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:12.750 [2024-12-10 00:18:56.340702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:12.750 [2024-12-10 00:18:56.340712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:12.750 [2024-12-10 00:18:56.340721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:12.750 [2024-12-10 00:18:56.340728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:12.750 [2024-12-10 00:18:56.342021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.750 [2024-12-10 00:18:56.342023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:12.750 [2024-12-10 00:18:56.409641] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:12.750 [2024-12-10 00:18:56.410172] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:12.750 [2024-12-10 00:18:56.410373] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
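The trace above builds the physical-NIC TCP test rig: one e810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, reachability is verified with ping in both directions, and the target application is then launched inside the namespace in interrupt mode on two cores. A minimal sketch of that sequence, assuming the interface names shown in the log and that the commands run as root from the SPDK source root:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, private namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &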
00:37:12.750 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:12.750 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:12.750 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:12.750 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:12.750 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.750 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:12.750 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:12.750 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.750 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.751 [2024-12-10 00:18:57.083385] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.751 [2024-12-10 00:18:57.115255] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.751 NULL1 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.751 00:18:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.751 Delay0 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=616543 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:12.751 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:13.011 [2024-12-10 00:18:57.235367] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
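With the target listening on its RPC socket, the test assembles the stack over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev that injects artificial latency into every I/O, the delay bdev attached as a namespace, and finally a 5-second spdk_nvme_perf run that the upcoming nvmf_delete_subsystem call will interrupt. A sketch of the same calls with scripts/rpc.py, assuming the default /var/tmp/spdk.sock socket (the trace issues them through the rpc_cmd helper):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                                  # backing null bdev
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Initiator-side load; deleting the subsystem mid-run produces the "completed with error (sct=0, sc=8)" lines below
./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &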
00:37:14.920 00:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:14.920 00:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.920 00:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 [2024-12-10 00:18:59.408666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7ae0 is same with the state(6) to be set 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 
00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, 
sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 starting I/O failed: -6 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 [2024-12-10 00:18:59.409678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faf54000c70 is same with the state(6) to be set 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 
00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Read completed with error (sct=0, sc=8) 00:37:15.180 Write completed with error (sct=0, sc=8) 00:37:15.181 Read completed with error (sct=0, sc=8) 00:37:15.181 Write completed with error (sct=0, sc=8) 00:37:15.181 Read completed with error (sct=0, sc=8) 00:37:15.181 Read completed with error (sct=0, sc=8) 00:37:15.181 Read completed with error (sct=0, sc=8) 00:37:15.181 Read completed with error (sct=0, sc=8) 00:37:15.181 Write completed with error (sct=0, sc=8) 00:37:15.181 Write completed with error (sct=0, sc=8) 00:37:15.181 Write completed with error (sct=0, sc=8) 00:37:15.181 Read completed with error (sct=0, sc=8) 00:37:15.181 Read completed with error (sct=0, sc=8) 00:37:15.181 Read completed with error (sct=0, sc=8) 00:37:15.181 Read completed with error (sct=0, sc=8) 00:37:16.118 [2024-12-10 00:19:00.372071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7720 is same with the state(6) to be set 00:37:16.118 Read completed with error (sct=0, sc=8) 00:37:16.118 Write completed with error (sct=0, sc=8) 00:37:16.118 Read completed with error (sct=0, sc=8) 00:37:16.118 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 [2024-12-10 00:19:00.412084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faf5400d050 is same with the state(6) to be set 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 
Read completed with error (sct=0, sc=8) 00:37:16.119 [2024-12-10 00:19:00.412236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faf5400d830 is same with the state(6) to be set 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 [2024-12-10 00:19:00.412760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab6410 is same with the state(6) to be set 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 Read completed with error (sct=0, sc=8) 00:37:16.119 Write completed with error (sct=0, sc=8) 00:37:16.119 [2024-12-10 00:19:00.413453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab7900 is same with the state(6) to be set 00:37:16.119 Initializing NVMe Controllers 00:37:16.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:16.119 Controller IO queue size 128, less than required. 00:37:16.119 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:16.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:16.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:16.119 Initialization complete. Launching workers. 
00:37:16.119 ======================================================== 00:37:16.119 Latency(us) 00:37:16.119 Device Information : IOPS MiB/s Average min max 00:37:16.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.38 0.08 913357.12 331.30 1011159.77 00:37:16.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.87 0.08 908934.64 281.25 1042569.60 00:37:16.119 ======================================================== 00:37:16.119 Total : 326.25 0.16 911135.78 281.25 1042569.60 00:37:16.119 00:37:16.119 [2024-12-10 00:19:00.413922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab7720 (9): Bad file descriptor 00:37:16.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:16.119 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.119 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:16.119 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 616543 00:37:16.119 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 616543 00:37:16.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (616543) - No such process 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 616543 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 616543 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 616543 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:16.688 [2024-12-10 00:19:00.947195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=617294 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 617294 00:37:16.688 00:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:16.688 [2024-12-10 00:19:01.034260] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
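The second pass recreates the subsystem, restarts perf (pid 617294), and then, in the repeated "kill -0 617294" / "sleep 0.5" checks around this point, polls the perf process while the subsystem is torn down underneath it; the later "No such process" message from kill is the expected outcome once perf exits. A hedged sketch of that polling idiom (variable names hypothetical; the 20-iteration bound and 0.5 s sleep are taken from the trace):

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do      # kill -0 only probes whether the PID still exists
    (( delay++ > 20 )) && { echo "perf did not exit in time"; break; }
    sleep 0.5
done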
00:37:17.256 00:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:17.256 00:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 617294 00:37:17.256 00:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:17.516 00:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:17.516 00:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 617294 00:37:17.516 00:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:18.085 00:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:18.085 00:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 617294 00:37:18.085 00:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:18.654 00:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:18.654 00:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 617294 00:37:18.654 00:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:19.221 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:19.221 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 617294 00:37:19.221 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:19.788 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:19.788 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 617294 00:37:19.788 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:19.788 Initializing NVMe Controllers 00:37:19.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:19.788 Controller IO queue size 128, less than required. 00:37:19.788 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:19.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:19.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:19.788 Initialization complete. Launching workers. 
00:37:19.788 ======================================================== 00:37:19.788 Latency(us) 00:37:19.788 Device Information : IOPS MiB/s Average min max 00:37:19.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002721.73 1000151.39 1041626.73 00:37:19.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004346.65 1000171.72 1010333.29 00:37:19.788 ======================================================== 00:37:19.788 Total : 256.00 0.12 1003534.19 1000151.39 1041626.73 00:37:19.788 00:37:20.047 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:20.047 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 617294 00:37:20.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (617294) - No such process 00:37:20.047 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 617294 00:37:20.047 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:20.047 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:20.047 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:20.047 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:20.047 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:20.047 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:20.047 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:20.047 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:20.047 rmmod nvme_tcp 00:37:20.306 rmmod nvme_fabrics 00:37:20.306 rmmod nvme_keyring 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 616506 ']' 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 616506 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 616506 ']' 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 616506 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 616506 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 616506' 00:37:20.306 killing process with pid 616506 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 616506 00:37:20.306 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 616506 00:37:20.566 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:20.566 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:20.566 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:20.566 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:20.566 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:20.566 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:20.566 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:20.566 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:20.566 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:20.566 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.566 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:20.566 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.483 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:22.483 00:37:22.483 real 0m18.245s 00:37:22.483 user 0m25.788s 00:37:22.483 sys 0m8.237s 00:37:22.483 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:22.483 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:22.483 ************************************ 00:37:22.483 END TEST nvmf_delete_subsystem 00:37:22.483 ************************************ 00:37:22.483 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:22.483 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:22.483 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:37:22.483 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:22.742 ************************************ 00:37:22.742 START TEST nvmf_host_management 00:37:22.742 ************************************ 00:37:22.742 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:22.742 * Looking for test storage... 00:37:22.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:22.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.742 --rc genhtml_branch_coverage=1 00:37:22.742 --rc genhtml_function_coverage=1 00:37:22.742 --rc genhtml_legend=1 00:37:22.742 --rc geninfo_all_blocks=1 00:37:22.742 --rc geninfo_unexecuted_blocks=1 00:37:22.742 00:37:22.742 ' 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:22.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.742 --rc genhtml_branch_coverage=1 00:37:22.742 --rc genhtml_function_coverage=1 00:37:22.742 --rc genhtml_legend=1 00:37:22.742 --rc geninfo_all_blocks=1 00:37:22.742 --rc geninfo_unexecuted_blocks=1 00:37:22.742 00:37:22.742 ' 00:37:22.742 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:22.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.742 --rc genhtml_branch_coverage=1 00:37:22.742 --rc genhtml_function_coverage=1 00:37:22.743 --rc genhtml_legend=1 00:37:22.743 --rc geninfo_all_blocks=1 00:37:22.743 --rc geninfo_unexecuted_blocks=1 00:37:22.743 00:37:22.743 ' 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:22.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.743 --rc genhtml_branch_coverage=1 00:37:22.743 --rc genhtml_function_coverage=1 00:37:22.743 --rc genhtml_legend=1 
00:37:22.743 --rc geninfo_all_blocks=1 00:37:22.743 --rc geninfo_unexecuted_blocks=1 00:37:22.743 00:37:22.743 ' 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:22.743 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:22.743 00:19:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:23.002 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:23.003 00:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:29.809 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:29.809 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:29.809 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:29.809 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:29.809 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:29.809 00:19:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:29.809 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:29.809 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:29.809 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:29.809 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:29.809 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:29.810 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:29.810 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:29.810 Found net devices under 0000:af:00.0: cvl_0_0 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:29.810 Found net devices under 0000:af:00.1: cvl_0_1 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:29.810 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:30.103 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:30.103 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:30.103 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:30.103 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:30.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:30.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:37:30.103 00:37:30.103 --- 10.0.0.2 ping statistics --- 00:37:30.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:30.103 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:37:30.103 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:30.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:30.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:37:30.103 00:37:30.103 --- 10.0.0.1 ping statistics --- 00:37:30.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:30.103 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:37:30.103 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:30.103 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:37:30.103 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:30.103 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=621547 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 621547 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 621547 ']' 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:30.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:30.104 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:30.104 [2024-12-10 00:19:14.496172] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:30.104 [2024-12-10 00:19:14.497170] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:37:30.104 [2024-12-10 00:19:14.497205] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:30.367 [2024-12-10 00:19:14.576369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:30.367 [2024-12-10 00:19:14.617149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:30.367 [2024-12-10 00:19:14.617186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:30.367 [2024-12-10 00:19:14.617195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:30.368 [2024-12-10 00:19:14.617204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:30.368 [2024-12-10 00:19:14.617227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:30.368 [2024-12-10 00:19:14.618963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:30.368 [2024-12-10 00:19:14.619069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:30.368 [2024-12-10 00:19:14.619178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.368 [2024-12-10 00:19:14.619179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:30.368 [2024-12-10 00:19:14.686144] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:30.368 [2024-12-10 00:19:14.686628] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:30.368 [2024-12-10 00:19:14.686678] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:30.368 [2024-12-10 00:19:14.687096] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:30.368 [2024-12-10 00:19:14.687112] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
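For the host_management test the target is brought up inside the cvl_0_0_ns_spdk namespace in interrupt mode; stripped of the trace prefixes, the startup traced above is approximately:

  # launch nvmf_tgt on cores 1-4 (-m 0x1E) with all tracepoint groups enabled and interrupt mode on
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # autotest helper: blocks until the app answers on /var/tmp/spdk.sock
  # with the app up, the test creates the TCP transport before adding any subsystems
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192

The reactor and spdk_thread notices above confirm that interrupt mode took effect on the app thread and all four poll groups before any transport work starts.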
00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:30.368 [2024-12-10 00:19:14.767942] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.368 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:30.368 Malloc0 00:37:30.627 [2024-12-10 00:19:14.860222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=621591 00:37:30.627 00:19:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 621591 /var/tmp/bdevperf.sock 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 621591 ']' 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:30.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:30.627 { 00:37:30.627 "params": { 00:37:30.627 "name": "Nvme$subsystem", 00:37:30.627 "trtype": "$TEST_TRANSPORT", 00:37:30.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:30.627 "adrfam": "ipv4", 00:37:30.627 "trsvcid": "$NVMF_PORT", 00:37:30.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:30.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:30.627 "hdgst": ${hdgst:-false}, 00:37:30.627 "ddgst": ${ddgst:-false} 00:37:30.627 }, 00:37:30.627 "method": "bdev_nvme_attach_controller" 00:37:30.627 } 00:37:30.627 EOF 00:37:30.627 )") 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
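The bdevperf side is wired up through gen_nvmf_target_json, which fills in the bdev_nvme_attach_controller parameters from the template above (the resolved JSON is printed just below); condensed, the invocation is roughly:

  # run bdevperf for 10s of 64KiB verify I/O at queue depth 64, feeding the generated
  # NVMe-oF attach config in over a process substitution (seen as --json /dev/fd/63 in the trace)
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10

With argument 0 the helper resolves to Nvme0, nqn.2016-06.io.spdk:cnode0 and 10.0.0.2:4420, which is what the printf output that follows shows.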
00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:30.627 00:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:30.627 "params": { 00:37:30.627 "name": "Nvme0", 00:37:30.627 "trtype": "tcp", 00:37:30.627 "traddr": "10.0.0.2", 00:37:30.627 "adrfam": "ipv4", 00:37:30.627 "trsvcid": "4420", 00:37:30.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:30.627 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:30.627 "hdgst": false, 00:37:30.627 "ddgst": false 00:37:30.627 }, 00:37:30.627 "method": "bdev_nvme_attach_controller" 00:37:30.627 }' 00:37:30.627 [2024-12-10 00:19:14.968745] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:37:30.627 [2024-12-10 00:19:14.968796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621591 ] 00:37:30.627 [2024-12-10 00:19:15.059402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.627 [2024-12-10 00:19:15.098189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:30.886 Running I/O for 10 seconds... 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:31.456 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:31.457 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:31.457 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.457 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.457 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.457 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1027 00:37:31.457 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1027 -ge 100 ']' 00:37:31.457 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:31.457 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:31.457 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:31.457 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:31.457 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.457 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.457 [2024-12-10 00:19:15.875598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b12f0 is same with the state(6) to be set 00:37:31.457 [2024-12-10 00:19:15.875635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b12f0 is same with the state(6) to be set 00:37:31.457 [2024-12-10 00:19:15.875645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b12f0 is same with the state(6) to be set 00:37:31.457 [2024-12-10 00:19:15.875654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b12f0 is same with the state(6) to be set 00:37:31.457 [2024-12-10 00:19:15.875663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b12f0 is same with the state(6) to be set 00:37:31.457 [2024-12-10 00:19:15.875672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b12f0 is same with the state(6) to be set 00:37:31.457 [2024-12-10 00:19:15.875680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b12f0 is same with the state(6) to be set 00:37:31.457 [2024-12-10 00:19:15.879790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:31.457 [2024-12-10 00:19:15.879828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.879841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:31.457 [2024-12-10 00:19:15.879856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.879866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:31.457 [2024-12-10 00:19:15.879875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.879885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:31.457 [2024-12-10 00:19:15.879894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.879903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479760 is same with the state(6) to be set 00:37:31.457 [2024-12-10 00:19:15.879943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.879955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.879971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.879980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.879991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 
[2024-12-10 00:19:15.880096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 
00:19:15.880292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.457 [2024-12-10 00:19:15.880467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.457 [2024-12-10 00:19:15.880477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 
00:19:15.880486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.458 [2024-12-10 00:19:15.880576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:31.458 [2024-12-10 00:19:15.880975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.880985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.880995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.881004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.881014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.881023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.881033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.881042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.881052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 
00:19:15.881062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.881072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.881081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.881091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.881102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.881112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.881121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.881131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.881140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.881150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.881159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.881169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.881178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 [2024-12-10 00:19:15.881188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.458 [2024-12-10 00:19:15.881199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.458 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.458 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.458 [2024-12-10 00:19:15.882108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:31.458 task offset: 8192 on job bdev=Nvme0n1 fails 00:37:31.458 00:37:31.458 Latency(us) 00:37:31.458 [2024-12-09T23:19:15.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:31.458 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:31.458 Job: Nvme0n1 ended in about 0.55 seconds with error 00:37:31.458 Verification LBA range: start 0x0 length 0x400 00:37:31.459 Nvme0n1 : 0.55 1991.16 124.45 117.13 0.00 29715.28 1913.65 26004.68 00:37:31.459 
[2024-12-09T23:19:15.932Z] =================================================================================================================== 00:37:31.459 [2024-12-09T23:19:15.932Z] Total : 1991.16 124.45 117.13 0.00 29715.28 1913.65 26004.68 00:37:31.459 [2024-12-10 00:19:15.884383] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:31.459 [2024-12-10 00:19:15.884404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1479760 (9): Bad file descriptor 00:37:31.459 [2024-12-10 00:19:15.885468] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:37:31.459 [2024-12-10 00:19:15.885543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:37:31.459 [2024-12-10 00:19:15.885568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.459 [2024-12-10 00:19:15.885586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:37:31.459 [2024-12-10 00:19:15.885596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:37:31.459 [2024-12-10 00:19:15.885605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:31.459 [2024-12-10 00:19:15.885617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1479760 00:37:31.459 [2024-12-10 00:19:15.885638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1479760 (9): Bad file descriptor 00:37:31.459 [2024-12-10 00:19:15.885652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:31.459 [2024-12-10 00:19:15.885661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:31.459 [2024-12-10 00:19:15.885672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:31.459 [2024-12-10 00:19:15.885682] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
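For reference, the bdevperf summary above ties back to the job parameters directly: the run used -q 64 -o 65536 -w verify, so every I/O is 64 KiB and MiB/s is simply IOPS divided by 16 (65536 B per I/O / 1048576 B per MiB). A minimal check of the failed run's figures, assuming nothing beyond the numbers printed in the table:

  # 1991.16 IOPS * 65536 B per I/O / 1048576 B per MiB ~= 124.45 MiB/s,
  # matching the MiB/s column; the same relation gives 129.85 MiB/s for the
  # 2077.60 IOPS of the successful 1-second run further below.
  awk 'BEGIN { printf "%.2f MiB/s\n", 1991.16 * 65536 / 1048576 }'

The non-zero Fail/s column (117.13) corresponds to I/Os completing with error once the host was removed from the subsystem mid-run, which is what this stage of host_management.sh deliberately provokes before re-adding the host.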
00:37:31.459 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.459 00:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:32.840 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 621591 00:37:32.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (621591) - No such process 00:37:32.840 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:32.840 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:32.840 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:32.840 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:32.840 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:32.840 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:32.840 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:32.840 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:32.840 { 00:37:32.840 "params": { 00:37:32.840 "name": "Nvme$subsystem", 00:37:32.840 "trtype": "$TEST_TRANSPORT", 00:37:32.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:32.840 "adrfam": "ipv4", 00:37:32.840 "trsvcid": "$NVMF_PORT", 00:37:32.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:32.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:32.840 "hdgst": ${hdgst:-false}, 00:37:32.840 "ddgst": ${ddgst:-false} 00:37:32.840 }, 00:37:32.840 "method": "bdev_nvme_attach_controller" 00:37:32.840 } 00:37:32.840 EOF 00:37:32.840 )") 00:37:32.840 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:32.840 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:37:32.840 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:32.840 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:32.840 "params": { 00:37:32.840 "name": "Nvme0", 00:37:32.840 "trtype": "tcp", 00:37:32.840 "traddr": "10.0.0.2", 00:37:32.840 "adrfam": "ipv4", 00:37:32.840 "trsvcid": "4420", 00:37:32.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:32.840 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:32.840 "hdgst": false, 00:37:32.840 "ddgst": false 00:37:32.840 }, 00:37:32.840 "method": "bdev_nvme_attach_controller" 00:37:32.840 }' 00:37:32.840 [2024-12-10 00:19:16.950210] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
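The second bdevperf invocation above obtains its target configuration the same way as the first: gen_nvmf_target_json (defined in the sourced nvmf/common.sh) prints a bdev_nvme_attach_controller entry for Nvme0 and the script passes it to bdevperf over a process-substitution descriptor (--json /dev/fd/62). A minimal standalone sketch of the same pattern, assuming SPDK_DIR points at the SPDK checkout used in this job and that the NVMF_* variables consumed by common.sh (for example NVMF_FIRST_TARGET_IP) are already exported by the test harness:

  # Reuse the test helper to emit the JSON shown in the log for subsystem 0,
  # then run the same 64-deep, 64 KiB verify workload for 1 second.
  source "$SPDK_DIR"/test/nvmf/common.sh
  "$SPDK_DIR"/build/examples/bdevperf --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 1
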
00:37:32.840 [2024-12-10 00:19:16.950260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621887 ] 00:37:32.840 [2024-12-10 00:19:17.040116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.840 [2024-12-10 00:19:17.078751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:32.840 Running I/O for 1 seconds... 00:37:34.040 2048.00 IOPS, 128.00 MiB/s 00:37:34.040 Latency(us) 00:37:34.040 [2024-12-09T23:19:18.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.040 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:34.040 Verification LBA range: start 0x0 length 0x400 00:37:34.040 Nvme0n1 : 1.02 2077.60 129.85 0.00 0.00 30335.03 5138.02 26109.54 00:37:34.040 [2024-12-09T23:19:18.513Z] =================================================================================================================== 00:37:34.040 [2024-12-09T23:19:18.513Z] Total : 2077.60 129.85 0.00 0.00 30335.03 5138.02 26109.54 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:34.040 rmmod nvme_tcp 00:37:34.040 rmmod nvme_fabrics 00:37:34.040 rmmod nvme_keyring 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 621547 ']' 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 621547 00:37:34.040 00:19:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 621547 ']' 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 621547 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:34.040 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 621547 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 621547' 00:37:34.300 killing process with pid 621547 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 621547 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 621547 00:37:34.300 [2024-12-10 00:19:18.730253] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:34.300 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:36.841 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:36.841 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:36.841 00:37:36.841 real 0m13.865s 00:37:36.841 user 0m18.359s 
00:37:36.841 sys 0m8.163s 00:37:36.841 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:36.841 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:36.841 ************************************ 00:37:36.841 END TEST nvmf_host_management 00:37:36.841 ************************************ 00:37:36.841 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:36.841 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:36.841 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:36.841 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:36.841 ************************************ 00:37:36.841 START TEST nvmf_lvol 00:37:36.841 ************************************ 00:37:36.841 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:36.841 * Looking for test storage... 00:37:36.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:36.841 00:19:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:36.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.841 --rc genhtml_branch_coverage=1 00:37:36.841 --rc genhtml_function_coverage=1 00:37:36.841 --rc genhtml_legend=1 00:37:36.841 --rc geninfo_all_blocks=1 00:37:36.841 --rc geninfo_unexecuted_blocks=1 00:37:36.841 00:37:36.841 ' 00:37:36.841 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:36.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.841 --rc genhtml_branch_coverage=1 00:37:36.841 --rc genhtml_function_coverage=1 00:37:36.841 --rc genhtml_legend=1 00:37:36.842 --rc geninfo_all_blocks=1 00:37:36.842 --rc geninfo_unexecuted_blocks=1 00:37:36.842 00:37:36.842 ' 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:36.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.842 --rc genhtml_branch_coverage=1 00:37:36.842 --rc genhtml_function_coverage=1 00:37:36.842 --rc genhtml_legend=1 00:37:36.842 --rc geninfo_all_blocks=1 00:37:36.842 --rc geninfo_unexecuted_blocks=1 00:37:36.842 00:37:36.842 ' 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:36.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.842 --rc genhtml_branch_coverage=1 00:37:36.842 --rc genhtml_function_coverage=1 00:37:36.842 --rc 
genhtml_legend=1 00:37:36.842 --rc geninfo_all_blocks=1 00:37:36.842 --rc geninfo_unexecuted_blocks=1 00:37:36.842 00:37:36.842 ' 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:36.842 00:19:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:36.842 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:45.003 00:19:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:45.003 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:45.003 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:45.004 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:45.004 Found net devices under 0000:af:00.0: cvl_0_0 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:45.004 Found net devices under 0000:af:00.1: cvl_0_1 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:45.004 
00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:45.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:45.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:37:45.004 00:37:45.004 --- 10.0.0.2 ping statistics --- 00:37:45.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.004 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:45.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:45.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:37:45.004 00:37:45.004 --- 10.0.0.1 ping statistics --- 00:37:45.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.004 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=625830 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 625830 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 625830 ']' 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:45.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:45.004 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:45.004 [2024-12-10 00:19:28.459733] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
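The nvmftestinit trace above wires the two E810 ports into a split topology: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace with the initiator address 10.0.0.1, so the NVMe/TCP traffic really crosses the NIC-to-NIC link. A minimal sketch of the equivalent wiring, condensed to the commands visible in the trace (interface names, addresses and port taken from the log):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> root namespace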
00:37:45.004 [2024-12-10 00:19:28.460787] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:37:45.004 [2024-12-10 00:19:28.460832] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:45.004 [2024-12-10 00:19:28.559198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:45.004 [2024-12-10 00:19:28.599379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:45.004 [2024-12-10 00:19:28.599413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:45.004 [2024-12-10 00:19:28.599423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:45.004 [2024-12-10 00:19:28.599434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:45.004 [2024-12-10 00:19:28.599442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:45.004 [2024-12-10 00:19:28.600814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:45.004 [2024-12-10 00:19:28.600923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:45.004 [2024-12-10 00:19:28.600924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:45.004 [2024-12-10 00:19:28.669516] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:45.004 [2024-12-10 00:19:28.670237] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:45.004 [2024-12-10 00:19:28.670290] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:45.004 [2024-12-10 00:19:28.670463] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
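With the namespace in place, nvmfappstart launches the target inside it with --interrupt-mode, which is why every reactor and nvmf poll group in the notices above reports being set to intr mode, and the harness then blocks until the RPC socket is up before issuing any rpc.py call. A sketch of that launch using the same paths as the log; the readiness loop only stands in for the harness's waitforlisten helper and is illustrative:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!                                           # 625830 in this run
  # wait for /var/tmp/spdk.sock to appear before talking to the target
  while ! [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done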
00:37:45.005 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:45.005 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:45.005 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:45.005 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:45.005 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:45.005 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:45.005 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:45.264 [2024-12-10 00:19:29.513775] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:45.264 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:45.553 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:45.553 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:45.553 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:45.553 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:45.811 00:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:46.070 00:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bff30613-46c7-4148-9562-897e12f96b74 00:37:46.071 00:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bff30613-46c7-4148-9562-897e12f96b74 lvol 20 00:37:46.330 00:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7747823f-0f95-405e-9929-767caf81824d 00:37:46.330 00:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:46.589 00:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7747823f-0f95-405e-9929-767caf81824d 00:37:46.589 00:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:46.849 [2024-12-10 00:19:31.161692] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:37:46.849 00:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:47.109 00:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=626384 00:37:47.109 00:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:47.109 00:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:48.047 00:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7747823f-0f95-405e-9929-767caf81824d MY_SNAPSHOT 00:37:48.306 00:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=fa818f59-f669-4ea9-ba5c-b1100eccb5c9 00:37:48.306 00:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7747823f-0f95-405e-9929-767caf81824d 30 00:37:48.565 00:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone fa818f59-f669-4ea9-ba5c-b1100eccb5c9 MY_CLONE 00:37:48.827 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ce1f9330-1e58-453b-b2d5-30933cf79974 00:37:48.827 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ce1f9330-1e58-453b-b2d5-30933cf79974 00:37:49.089 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 626384 00:37:59.077 Initializing NVMe Controllers 00:37:59.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:59.077 Controller IO queue size 128, less than required. 00:37:59.077 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:59.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:59.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:59.077 Initialization complete. Launching workers. 
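Everything between the transport creation and the latency summary that follows is driven through rpc.py: two 64 MiB malloc bdevs are striped into raid0, a logical volume store and a 20 MiB volume are built on top, the volume is exported as namespace 1 of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, and while spdk_nvme_perf runs a 10-second random-write load against it (4 KiB I/O, queue depth 128, cores 0x18) the volume is snapshotted, resized to 30 MiB, cloned, and the clone inflated. A condensed sketch of that sequence; the shell variables capturing the returned UUIDs are illustrative, the calls and sizes are the ones in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                         # Malloc0
  $rpc bdev_malloc_create 64 512                         # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)         # bff30613-46c7-4148-9562-897e12f96b74
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)        # 7747823f-0f95-405e-9929-767caf81824d
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # with spdk_nvme_perf writing in the background, exercise the lvol operations
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)    # fa818f59-f669-4ea9-ba5c-b1100eccb5c9
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)         # ce1f9330-1e58-453b-b2d5-30933cf79974
  $rpc bdev_lvol_inflate "$clone"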
00:37:59.077 ======================================================== 00:37:59.077 Latency(us) 00:37:59.077 Device Information : IOPS MiB/s Average min max 00:37:59.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12967.10 50.65 9876.67 1352.63 60343.19 00:37:59.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12764.30 49.86 10029.39 3167.12 57938.42 00:37:59.077 ======================================================== 00:37:59.077 Total : 25731.39 100.51 9952.43 1352.63 60343.19 00:37:59.077 00:37:59.077 00:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:59.077 00:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7747823f-0f95-405e-9929-767caf81824d 00:37:59.077 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bff30613-46c7-4148-9562-897e12f96b74 00:37:59.077 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:59.077 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:59.077 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:37:59.077 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:59.077 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:59.078 rmmod nvme_tcp 00:37:59.078 rmmod nvme_fabrics 00:37:59.078 rmmod nvme_keyring 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 625830 ']' 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 625830 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 625830 ']' 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 625830 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 625830 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 625830' 00:37:59.078 killing process with pid 625830 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 625830 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 625830 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:59.078 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:00.465 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:00.465 00:38:00.465 real 0m23.870s 00:38:00.465 user 0m54.397s 00:38:00.465 sys 0m12.695s 00:38:00.465 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:00.465 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:00.465 ************************************ 00:38:00.465 END TEST nvmf_lvol 00:38:00.465 ************************************ 00:38:00.465 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:00.465 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:00.465 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:00.465 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:00.465 ************************************ 00:38:00.465 START TEST nvmf_lvs_grow 00:38:00.465 
************************************ 00:38:00.465 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:00.730 * Looking for test storage... 00:38:00.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:00.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.730 --rc genhtml_branch_coverage=1 00:38:00.730 --rc genhtml_function_coverage=1 00:38:00.730 --rc genhtml_legend=1 00:38:00.730 --rc geninfo_all_blocks=1 00:38:00.730 --rc geninfo_unexecuted_blocks=1 00:38:00.730 00:38:00.730 ' 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:00.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.730 --rc genhtml_branch_coverage=1 00:38:00.730 --rc genhtml_function_coverage=1 00:38:00.730 --rc genhtml_legend=1 00:38:00.730 --rc geninfo_all_blocks=1 00:38:00.730 --rc geninfo_unexecuted_blocks=1 00:38:00.730 00:38:00.730 ' 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:00.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.730 --rc genhtml_branch_coverage=1 00:38:00.730 --rc genhtml_function_coverage=1 00:38:00.730 --rc genhtml_legend=1 00:38:00.730 --rc geninfo_all_blocks=1 00:38:00.730 --rc geninfo_unexecuted_blocks=1 00:38:00.730 00:38:00.730 ' 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:00.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.730 --rc genhtml_branch_coverage=1 00:38:00.730 --rc genhtml_function_coverage=1 00:38:00.730 --rc genhtml_legend=1 00:38:00.730 --rc geninfo_all_blocks=1 00:38:00.730 --rc geninfo_unexecuted_blocks=1 00:38:00.730 00:38:00.730 ' 00:38:00.730 00:19:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:00.730 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
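The lcov gate traced at the top of this test ('lt 1.15 2' via scripts/common.sh cmp_versions) is a plain component-wise compare: both version strings are split on '.', '-' and ':' and the numeric fields are compared left to right until one side wins. A stand-alone sketch of that idea; the helper below only mirrors the behaviour visible in the trace and is not the harness implementation:

  lt() {                      # succeeds when version $1 sorts before version $2
      local IFS=.-:
      local -a v1=($1) v2=($2)
      local i
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
          ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
      done
      return 1                # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov older than 2.x, keep the branch/function coverage flags"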
00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:00.731 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:08.861 00:19:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:08.861 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:08.861 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:08.861 Found net devices under 0000:af:00.0: cvl_0_0 00:38:08.861 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:08.862 Found net devices under 0000:af:00.1: cvl_0_1 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:08.862 00:19:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:08.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:08.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:38:08.862 00:38:08.862 --- 10.0.0.2 ping statistics --- 00:38:08.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.862 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:08.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:08.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:38:08.862 00:38:08.862 --- 10.0.0.1 ping statistics --- 00:38:08.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.862 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=631916 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 631916 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 631916 ']' 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:08.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:08.862 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:08.862 [2024-12-10 00:19:52.493290] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
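For readability, the nvmf_tcp_init sequence traced above condenses to the shell sketch below. The device names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses and TCP port 4420 are the values printed in this run; the iptables comment argument and the long /var/jenkins paths are left out, so this is a condensed sketch rather than the exact harness code.

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, verify reachability both ways, load the initiator driver
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp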
00:38:08.862 [2024-12-10 00:19:52.494310] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:38:08.862 [2024-12-10 00:19:52.494347] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:08.862 [2024-12-10 00:19:52.587664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:08.862 [2024-12-10 00:19:52.628063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:08.862 [2024-12-10 00:19:52.628097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:08.862 [2024-12-10 00:19:52.628106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:08.862 [2024-12-10 00:19:52.628115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:08.862 [2024-12-10 00:19:52.628137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:08.862 [2024-12-10 00:19:52.628694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.862 [2024-12-10 00:19:52.695813] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:08.862 [2024-12-10 00:19:52.696051] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:08.862 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:08.862 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:08.862 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:08.862 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:08.862 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:09.122 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:09.122 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:09.122 [2024-12-10 00:19:53.541421] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.122 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:09.122 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:09.122 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:09.122 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:09.383 ************************************ 00:38:09.383 START TEST lvs_grow_clean 00:38:09.383 ************************************ 00:38:09.383 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:38:09.383 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:09.383 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:09.383 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:09.383 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:09.383 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:09.383 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:09.383 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:09.383 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:09.383 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:09.383 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:09.383 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:09.653 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f 00:38:09.653 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f 00:38:09.653 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:09.914 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:09.914 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:09.914 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f lvol 150 00:38:10.173 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0c4a2b2d-6583-4760-b0f8-b14c45d27a64 00:38:10.173 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:10.174 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:10.174 [2024-12-10 00:19:54.597153] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:10.174 [2024-12-10 00:19:54.597297] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:10.174 true 00:38:10.174 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f 00:38:10.174 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:10.434 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:10.434 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:10.693 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0c4a2b2d-6583-4760-b0f8-b14c45d27a64 00:38:10.693 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:10.953 [2024-12-10 00:19:55.333644] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:10.953 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:11.212 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=632485 00:38:11.213 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:11.213 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:11.213 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 632485 /var/tmp/bdevperf.sock 00:38:11.213 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 632485 ']' 00:38:11.213 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:38:11.213 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:11.213 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:11.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:11.213 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:11.213 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:11.213 [2024-12-10 00:19:55.592946] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:38:11.213 [2024-12-10 00:19:55.593006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632485 ] 00:38:11.213 [2024-12-10 00:19:55.684468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.472 [2024-12-10 00:19:55.724409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:11.472 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:11.472 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:11.472 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:11.731 Nvme0n1 00:38:11.731 00:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:11.991 [ 00:38:11.991 { 00:38:11.991 "name": "Nvme0n1", 00:38:11.991 "aliases": [ 00:38:11.991 "0c4a2b2d-6583-4760-b0f8-b14c45d27a64" 00:38:11.991 ], 00:38:11.991 "product_name": "NVMe disk", 00:38:11.991 "block_size": 4096, 00:38:11.991 "num_blocks": 38912, 00:38:11.991 "uuid": "0c4a2b2d-6583-4760-b0f8-b14c45d27a64", 00:38:11.991 "numa_id": 1, 00:38:11.991 "assigned_rate_limits": { 00:38:11.991 "rw_ios_per_sec": 0, 00:38:11.991 "rw_mbytes_per_sec": 0, 00:38:11.991 "r_mbytes_per_sec": 0, 00:38:11.991 "w_mbytes_per_sec": 0 00:38:11.991 }, 00:38:11.991 "claimed": false, 00:38:11.991 "zoned": false, 00:38:11.991 "supported_io_types": { 00:38:11.991 "read": true, 00:38:11.991 "write": true, 00:38:11.991 "unmap": true, 00:38:11.991 "flush": true, 00:38:11.991 "reset": true, 00:38:11.991 "nvme_admin": true, 00:38:11.991 "nvme_io": true, 00:38:11.991 "nvme_io_md": false, 00:38:11.991 "write_zeroes": true, 00:38:11.991 "zcopy": false, 00:38:11.991 "get_zone_info": false, 00:38:11.991 "zone_management": false, 00:38:11.991 "zone_append": false, 00:38:11.991 "compare": true, 00:38:11.991 "compare_and_write": true, 00:38:11.991 "abort": true, 00:38:11.991 "seek_hole": false, 00:38:11.991 "seek_data": false, 00:38:11.991 "copy": true, 
00:38:11.991 "nvme_iov_md": false 00:38:11.991 }, 00:38:11.991 "memory_domains": [ 00:38:11.991 { 00:38:11.991 "dma_device_id": "system", 00:38:11.991 "dma_device_type": 1 00:38:11.991 } 00:38:11.991 ], 00:38:11.991 "driver_specific": { 00:38:11.991 "nvme": [ 00:38:11.991 { 00:38:11.991 "trid": { 00:38:11.991 "trtype": "TCP", 00:38:11.991 "adrfam": "IPv4", 00:38:11.991 "traddr": "10.0.0.2", 00:38:11.991 "trsvcid": "4420", 00:38:11.991 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:11.991 }, 00:38:11.991 "ctrlr_data": { 00:38:11.991 "cntlid": 1, 00:38:11.991 "vendor_id": "0x8086", 00:38:11.991 "model_number": "SPDK bdev Controller", 00:38:11.991 "serial_number": "SPDK0", 00:38:11.991 "firmware_revision": "25.01", 00:38:11.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:11.991 "oacs": { 00:38:11.991 "security": 0, 00:38:11.992 "format": 0, 00:38:11.992 "firmware": 0, 00:38:11.992 "ns_manage": 0 00:38:11.992 }, 00:38:11.992 "multi_ctrlr": true, 00:38:11.992 "ana_reporting": false 00:38:11.992 }, 00:38:11.992 "vs": { 00:38:11.992 "nvme_version": "1.3" 00:38:11.992 }, 00:38:11.992 "ns_data": { 00:38:11.992 "id": 1, 00:38:11.992 "can_share": true 00:38:11.992 } 00:38:11.992 } 00:38:11.992 ], 00:38:11.992 "mp_policy": "active_passive" 00:38:11.992 } 00:38:11.992 } 00:38:11.992 ] 00:38:11.992 00:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=632513 00:38:11.992 00:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:11.992 00:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:11.992 Running I/O for 10 seconds... 
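Stripped of the xtrace prefixes, the lvs_grow_clean setup traced above amounts to the RPC sequence below. Here rpc.py stands for scripts/rpc.py in the SPDK checkout, the aio_bdev file path is shown relative to the checkout, and <lvs_uuid>/<lvol_uuid> stand for the UUIDs printed in this run (958d7d7c-a95e-... and 0c4a2b2d-6583-...).

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    truncate -s 200M test/nvmf/target/aio_bdev
    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
           --md-pages-per-cluster-ratio 300 aio_bdev lvs        # reports 49 data clusters
    rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150               # 150 MiB logical volume
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420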
00:38:13.372 Latency(us) 00:38:13.372 [2024-12-09T23:19:57.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:13.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:13.372 Nvme0n1 : 1.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:38:13.372 [2024-12-09T23:19:57.845Z] =================================================================================================================== 00:38:13.372 [2024-12-09T23:19:57.845Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:38:13.372 00:38:13.941 00:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f 00:38:14.201 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:14.201 Nvme0n1 : 2.00 23558.50 92.03 0.00 0.00 0.00 0.00 0.00 00:38:14.201 [2024-12-09T23:19:58.674Z] =================================================================================================================== 00:38:14.201 [2024-12-09T23:19:58.674Z] Total : 23558.50 92.03 0.00 0.00 0.00 0.00 0.00 00:38:14.201 00:38:14.201 true 00:38:14.201 00:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f 00:38:14.201 00:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:14.461 00:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:14.461 00:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:14.461 00:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 632513 00:38:15.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:15.030 Nvme0n1 : 3.00 23664.33 92.44 0.00 0.00 0.00 0.00 0.00 00:38:15.030 [2024-12-09T23:19:59.503Z] =================================================================================================================== 00:38:15.030 [2024-12-09T23:19:59.503Z] Total : 23664.33 92.44 0.00 0.00 0.00 0.00 0.00 00:38:15.030 00:38:16.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:16.410 Nvme0n1 : 4.00 23685.50 92.52 0.00 0.00 0.00 0.00 0.00 00:38:16.410 [2024-12-09T23:20:00.883Z] =================================================================================================================== 00:38:16.410 [2024-12-09T23:20:00.883Z] Total : 23685.50 92.52 0.00 0.00 0.00 0.00 0.00 00:38:16.411 00:38:17.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:17.351 Nvme0n1 : 5.00 23755.80 92.80 0.00 0.00 0.00 0.00 0.00 00:38:17.351 [2024-12-09T23:20:01.824Z] =================================================================================================================== 00:38:17.351 [2024-12-09T23:20:01.824Z] Total : 23755.80 92.80 0.00 0.00 0.00 0.00 0.00 00:38:17.351 00:38:18.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:18.295 Nvme0n1 : 6.00 23818.17 93.04 0.00 0.00 0.00 0.00 0.00 00:38:18.295 [2024-12-09T23:20:02.768Z] 
=================================================================================================================== 00:38:18.296 [2024-12-09T23:20:02.769Z] Total : 23818.17 93.04 0.00 0.00 0.00 0.00 0.00 00:38:18.296 00:38:19.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:19.235 Nvme0n1 : 7.00 23862.71 93.21 0.00 0.00 0.00 0.00 0.00 00:38:19.235 [2024-12-09T23:20:03.708Z] =================================================================================================================== 00:38:19.235 [2024-12-09T23:20:03.708Z] Total : 23862.71 93.21 0.00 0.00 0.00 0.00 0.00 00:38:19.235 00:38:20.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:20.173 Nvme0n1 : 8.00 23904.12 93.38 0.00 0.00 0.00 0.00 0.00 00:38:20.173 [2024-12-09T23:20:04.646Z] =================================================================================================================== 00:38:20.173 [2024-12-09T23:20:04.646Z] Total : 23904.12 93.38 0.00 0.00 0.00 0.00 0.00 00:38:20.173 00:38:21.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:21.123 Nvme0n1 : 9.00 23931.11 93.48 0.00 0.00 0.00 0.00 0.00 00:38:21.123 [2024-12-09T23:20:05.596Z] =================================================================================================================== 00:38:21.123 [2024-12-09T23:20:05.597Z] Total : 23931.11 93.48 0.00 0.00 0.00 0.00 0.00 00:38:21.124 00:38:22.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:22.064 Nvme0n1 : 10.00 23959.10 93.59 0.00 0.00 0.00 0.00 0.00 00:38:22.064 [2024-12-09T23:20:06.537Z] =================================================================================================================== 00:38:22.064 [2024-12-09T23:20:06.537Z] Total : 23959.10 93.59 0.00 0.00 0.00 0.00 0.00 00:38:22.064 00:38:22.064 00:38:22.064 Latency(us) 00:38:22.064 [2024-12-09T23:20:06.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:22.064 Nvme0n1 : 10.00 23957.51 93.58 0.00 0.00 5339.44 3158.84 27682.41 00:38:22.064 [2024-12-09T23:20:06.537Z] =================================================================================================================== 00:38:22.064 [2024-12-09T23:20:06.537Z] Total : 23957.51 93.58 0.00 0.00 5339.44 3158.84 27682.41 00:38:22.064 { 00:38:22.064 "results": [ 00:38:22.064 { 00:38:22.064 "job": "Nvme0n1", 00:38:22.064 "core_mask": "0x2", 00:38:22.064 "workload": "randwrite", 00:38:22.064 "status": "finished", 00:38:22.064 "queue_depth": 128, 00:38:22.064 "io_size": 4096, 00:38:22.064 "runtime": 10.002625, 00:38:22.064 "iops": 23957.51115332225, 00:38:22.064 "mibps": 93.58402794266505, 00:38:22.064 "io_failed": 0, 00:38:22.064 "io_timeout": 0, 00:38:22.064 "avg_latency_us": 5339.438902104007, 00:38:22.064 "min_latency_us": 3158.8352, 00:38:22.064 "max_latency_us": 27682.4064 00:38:22.064 } 00:38:22.064 ], 00:38:22.064 "core_count": 1 00:38:22.064 } 00:38:22.064 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 632485 00:38:22.064 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 632485 ']' 00:38:22.064 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 632485 00:38:22.064 00:20:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:22.064 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:22.064 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 632485 00:38:22.324 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:22.324 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:22.324 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 632485' 00:38:22.324 killing process with pid 632485 00:38:22.324 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 632485 00:38:22.324 Received shutdown signal, test time was about 10.000000 seconds 00:38:22.324 00:38:22.324 Latency(us) 00:38:22.324 [2024-12-09T23:20:06.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.324 [2024-12-09T23:20:06.797Z] =================================================================================================================== 00:38:22.324 [2024-12-09T23:20:06.797Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:22.324 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 632485 00:38:22.324 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:22.584 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:22.844 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f 00:38:22.844 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:23.188 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:23.188 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:23.188 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:23.188 [2024-12-10 00:20:07.497221] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:23.188 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f 00:38:23.188 00:20:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:38:23.188 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f 00:38:23.188 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:23.188 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:23.188 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:23.188 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:23.188 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:23.188 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:23.188 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:23.188 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:23.188 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f 00:38:23.503 request: 00:38:23.503 { 00:38:23.503 "uuid": "958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f", 00:38:23.503 "method": "bdev_lvol_get_lvstores", 00:38:23.503 "req_id": 1 00:38:23.503 } 00:38:23.503 Got JSON-RPC error response 00:38:23.503 response: 00:38:23.503 { 00:38:23.503 "code": -19, 00:38:23.503 "message": "No such device" 00:38:23.503 } 00:38:23.503 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:38:23.503 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:23.503 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:23.503 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:23.503 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:23.503 aio_bdev 00:38:23.503 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0c4a2b2d-6583-4760-b0f8-b14c45d27a64 
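The cluster counts asserted in this part of the trace follow directly from the sizes used: the lvstore is created with a 4 MiB cluster size on a 200 MiB AIO file, grown by truncating the file to 400 MiB and rescanning, and carries one 150 MiB lvol. Worked out:

    200 MiB / 4 MiB  = 50 clusters  -> 49 reported data clusters (the remainder holds lvstore metadata)
    400 MiB / 4 MiB  = 100 clusters -> 99 data clusters after bdev_lvol_grow_lvstore
    150 MiB lvol     = ceil(150/4)  = 38 allocated clusters
    free clusters    = 99 - 38      = 61

which matches the data_clusters=99 and free_clusters=61 values captured above and the "num_allocated_clusters": 38 reported in the bdev JSON that follows.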
00:38:23.503 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=0c4a2b2d-6583-4760-b0f8-b14c45d27a64 00:38:23.503 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:23.503 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:38:23.503 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:23.503 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:23.503 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:23.769 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0c4a2b2d-6583-4760-b0f8-b14c45d27a64 -t 2000 00:38:24.029 [ 00:38:24.029 { 00:38:24.029 "name": "0c4a2b2d-6583-4760-b0f8-b14c45d27a64", 00:38:24.029 "aliases": [ 00:38:24.029 "lvs/lvol" 00:38:24.029 ], 00:38:24.029 "product_name": "Logical Volume", 00:38:24.029 "block_size": 4096, 00:38:24.029 "num_blocks": 38912, 00:38:24.029 "uuid": "0c4a2b2d-6583-4760-b0f8-b14c45d27a64", 00:38:24.029 "assigned_rate_limits": { 00:38:24.029 "rw_ios_per_sec": 0, 00:38:24.029 "rw_mbytes_per_sec": 0, 00:38:24.029 "r_mbytes_per_sec": 0, 00:38:24.029 "w_mbytes_per_sec": 0 00:38:24.029 }, 00:38:24.029 "claimed": false, 00:38:24.029 "zoned": false, 00:38:24.029 "supported_io_types": { 00:38:24.029 "read": true, 00:38:24.029 "write": true, 00:38:24.029 "unmap": true, 00:38:24.029 "flush": false, 00:38:24.029 "reset": true, 00:38:24.029 "nvme_admin": false, 00:38:24.029 "nvme_io": false, 00:38:24.029 "nvme_io_md": false, 00:38:24.029 "write_zeroes": true, 00:38:24.029 "zcopy": false, 00:38:24.029 "get_zone_info": false, 00:38:24.029 "zone_management": false, 00:38:24.029 "zone_append": false, 00:38:24.029 "compare": false, 00:38:24.029 "compare_and_write": false, 00:38:24.029 "abort": false, 00:38:24.029 "seek_hole": true, 00:38:24.029 "seek_data": true, 00:38:24.029 "copy": false, 00:38:24.029 "nvme_iov_md": false 00:38:24.029 }, 00:38:24.029 "driver_specific": { 00:38:24.029 "lvol": { 00:38:24.029 "lvol_store_uuid": "958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f", 00:38:24.029 "base_bdev": "aio_bdev", 00:38:24.029 "thin_provision": false, 00:38:24.029 "num_allocated_clusters": 38, 00:38:24.029 "snapshot": false, 00:38:24.029 "clone": false, 00:38:24.029 "esnap_clone": false 00:38:24.029 } 00:38:24.029 } 00:38:24.029 } 00:38:24.029 ] 00:38:24.029 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:38:24.030 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f 00:38:24.030 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:24.030 00:20:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:24.030 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f 00:38:24.030 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:24.290 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:24.290 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0c4a2b2d-6583-4760-b0f8-b14c45d27a64 00:38:24.548 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 958d7d7c-a95e-4ff5-9e7b-0a00ef69b76f 00:38:24.808 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:24.808 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:24.808 00:38:24.808 real 0m15.649s 00:38:24.808 user 0m14.781s 00:38:24.808 sys 0m1.875s 00:38:24.808 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:24.808 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:24.808 ************************************ 00:38:24.808 END TEST lvs_grow_clean 00:38:24.808 ************************************ 00:38:25.068 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:25.068 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:25.068 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:25.068 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:25.068 ************************************ 00:38:25.068 START TEST lvs_grow_dirty 00:38:25.068 ************************************ 00:38:25.068 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:38:25.068 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:25.068 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:25.068 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:25.068 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:25.068 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:25.068 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:25.068 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:25.068 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:25.068 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:25.327 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:25.327 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:25.327 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6d4bbeac-72ef-455a-abf6-d94313f0bb79 00:38:25.327 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4bbeac-72ef-455a-abf6-d94313f0bb79 00:38:25.327 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:25.587 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:25.587 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:25.587 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6d4bbeac-72ef-455a-abf6-d94313f0bb79 lvol 150 00:38:25.845 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2 00:38:25.845 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:25.845 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:26.105 [2024-12-10 00:20:10.357131] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:26.105 [2024-12-10 00:20:10.357273] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:26.105 true 00:38:26.105 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4bbeac-72ef-455a-abf6-d94313f0bb79 00:38:26.105 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:26.105 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:26.105 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:26.364 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2 00:38:26.623 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:26.882 [2024-12-10 00:20:11.109621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:26.882 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:26.882 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=635057 00:38:26.882 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:26.882 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:26.882 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 635057 /var/tmp/bdevperf.sock 00:38:26.882 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 635057 ']' 00:38:26.882 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:26.882 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:26.882 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:26.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
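On the initiator side both the clean and dirty variants drive I/O the same way: bdevperf is launched with its own RPC socket, a TCP controller is attached against the subsystem created by the target, and perform_tests runs the 10 second randwrite workload whose per-second IOPS table follows. A condensed sketch, with paths relative to the SPDK checkout and backgrounding added for illustration:

    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
        -w randwrite -t 10 -S 1 -z &                  # -z: stay idle until perform_tests is sent
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests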
00:38:26.882 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:26.882 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:27.142 [2024-12-10 00:20:11.358116] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:38:27.142 [2024-12-10 00:20:11.358169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635057 ] 00:38:27.142 [2024-12-10 00:20:11.447447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.142 [2024-12-10 00:20:11.487358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:27.142 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:27.142 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:27.142 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:27.712 Nvme0n1 00:38:27.712 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:27.712 [ 00:38:27.712 { 00:38:27.712 "name": "Nvme0n1", 00:38:27.712 "aliases": [ 00:38:27.712 "9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2" 00:38:27.712 ], 00:38:27.712 "product_name": "NVMe disk", 00:38:27.712 "block_size": 4096, 00:38:27.712 "num_blocks": 38912, 00:38:27.712 "uuid": "9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2", 00:38:27.712 "numa_id": 1, 00:38:27.712 "assigned_rate_limits": { 00:38:27.712 "rw_ios_per_sec": 0, 00:38:27.712 "rw_mbytes_per_sec": 0, 00:38:27.712 "r_mbytes_per_sec": 0, 00:38:27.712 "w_mbytes_per_sec": 0 00:38:27.712 }, 00:38:27.712 "claimed": false, 00:38:27.712 "zoned": false, 00:38:27.712 "supported_io_types": { 00:38:27.712 "read": true, 00:38:27.712 "write": true, 00:38:27.712 "unmap": true, 00:38:27.712 "flush": true, 00:38:27.712 "reset": true, 00:38:27.712 "nvme_admin": true, 00:38:27.712 "nvme_io": true, 00:38:27.712 "nvme_io_md": false, 00:38:27.712 "write_zeroes": true, 00:38:27.712 "zcopy": false, 00:38:27.712 "get_zone_info": false, 00:38:27.712 "zone_management": false, 00:38:27.712 "zone_append": false, 00:38:27.712 "compare": true, 00:38:27.712 "compare_and_write": true, 00:38:27.712 "abort": true, 00:38:27.712 "seek_hole": false, 00:38:27.712 "seek_data": false, 00:38:27.712 "copy": true, 00:38:27.712 "nvme_iov_md": false 00:38:27.712 }, 00:38:27.712 "memory_domains": [ 00:38:27.712 { 00:38:27.712 "dma_device_id": "system", 00:38:27.712 "dma_device_type": 1 00:38:27.712 } 00:38:27.712 ], 00:38:27.712 "driver_specific": { 00:38:27.712 "nvme": [ 00:38:27.712 { 00:38:27.712 "trid": { 00:38:27.712 "trtype": "TCP", 00:38:27.712 "adrfam": "IPv4", 00:38:27.712 "traddr": "10.0.0.2", 00:38:27.712 "trsvcid": "4420", 00:38:27.712 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:27.712 }, 00:38:27.712 "ctrlr_data": { 
00:38:27.712 "cntlid": 1, 00:38:27.712 "vendor_id": "0x8086", 00:38:27.712 "model_number": "SPDK bdev Controller", 00:38:27.712 "serial_number": "SPDK0", 00:38:27.712 "firmware_revision": "25.01", 00:38:27.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:27.712 "oacs": { 00:38:27.712 "security": 0, 00:38:27.712 "format": 0, 00:38:27.712 "firmware": 0, 00:38:27.712 "ns_manage": 0 00:38:27.712 }, 00:38:27.712 "multi_ctrlr": true, 00:38:27.712 "ana_reporting": false 00:38:27.712 }, 00:38:27.712 "vs": { 00:38:27.712 "nvme_version": "1.3" 00:38:27.712 }, 00:38:27.712 "ns_data": { 00:38:27.712 "id": 1, 00:38:27.712 "can_share": true 00:38:27.712 } 00:38:27.712 } 00:38:27.712 ], 00:38:27.712 "mp_policy": "active_passive" 00:38:27.712 } 00:38:27.712 } 00:38:27.712 ] 00:38:27.712 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=635188 00:38:27.712 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:27.712 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:27.978 Running I/O for 10 seconds... 00:38:28.917 Latency(us) 00:38:28.917 [2024-12-09T23:20:13.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:28.917 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:28.917 Nvme0n1 : 1.00 23178.00 90.54 0.00 0.00 0.00 0.00 0.00 00:38:28.917 [2024-12-09T23:20:13.390Z] =================================================================================================================== 00:38:28.917 [2024-12-09T23:20:13.390Z] Total : 23178.00 90.54 0.00 0.00 0.00 0.00 0.00 00:38:28.917 00:38:29.857 00:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6d4bbeac-72ef-455a-abf6-d94313f0bb79 00:38:29.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:29.858 Nvme0n1 : 2.00 23590.50 92.15 0.00 0.00 0.00 0.00 0.00 00:38:29.858 [2024-12-09T23:20:14.331Z] =================================================================================================================== 00:38:29.858 [2024-12-09T23:20:14.331Z] Total : 23590.50 92.15 0.00 0.00 0.00 0.00 0.00 00:38:29.858 00:38:30.117 true 00:38:30.117 00:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4bbeac-72ef-455a-abf6-d94313f0bb79 00:38:30.117 00:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:30.117 00:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:30.117 00:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:30.117 00:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 635188 00:38:31.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:31.055 Nvme0n1 : 3.00 
23728.00 92.69 0.00 0.00 0.00 0.00 0.00 00:38:31.055 [2024-12-09T23:20:15.528Z] =================================================================================================================== 00:38:31.055 [2024-12-09T23:20:15.528Z] Total : 23728.00 92.69 0.00 0.00 0.00 0.00 0.00 00:38:31.055 00:38:31.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:31.993 Nvme0n1 : 4.00 23828.50 93.08 0.00 0.00 0.00 0.00 0.00 00:38:31.993 [2024-12-09T23:20:16.466Z] =================================================================================================================== 00:38:31.993 [2024-12-09T23:20:16.466Z] Total : 23828.50 93.08 0.00 0.00 0.00 0.00 0.00 00:38:31.993 00:38:32.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:32.936 Nvme0n1 : 5.00 23914.20 93.41 0.00 0.00 0.00 0.00 0.00 00:38:32.936 [2024-12-09T23:20:17.409Z] =================================================================================================================== 00:38:32.936 [2024-12-09T23:20:17.409Z] Total : 23914.20 93.41 0.00 0.00 0.00 0.00 0.00 00:38:32.936 00:38:33.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:33.875 Nvme0n1 : 6.00 23971.33 93.64 0.00 0.00 0.00 0.00 0.00 00:38:33.875 [2024-12-09T23:20:18.348Z] =================================================================================================================== 00:38:33.875 [2024-12-09T23:20:18.348Z] Total : 23971.33 93.64 0.00 0.00 0.00 0.00 0.00 00:38:33.875 00:38:34.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:34.823 Nvme0n1 : 7.00 24012.14 93.80 0.00 0.00 0.00 0.00 0.00 00:38:34.823 [2024-12-09T23:20:19.296Z] =================================================================================================================== 00:38:34.823 [2024-12-09T23:20:19.296Z] Total : 24012.14 93.80 0.00 0.00 0.00 0.00 0.00 00:38:34.823 00:38:36.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:36.206 Nvme0n1 : 8.00 24011.00 93.79 0.00 0.00 0.00 0.00 0.00 00:38:36.206 [2024-12-09T23:20:20.679Z] =================================================================================================================== 00:38:36.206 [2024-12-09T23:20:20.679Z] Total : 24011.00 93.79 0.00 0.00 0.00 0.00 0.00 00:38:36.206 00:38:37.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:37.144 Nvme0n1 : 9.00 24038.33 93.90 0.00 0.00 0.00 0.00 0.00 00:38:37.144 [2024-12-09T23:20:21.617Z] =================================================================================================================== 00:38:37.144 [2024-12-09T23:20:21.617Z] Total : 24038.33 93.90 0.00 0.00 0.00 0.00 0.00 00:38:37.144 00:38:38.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:38.085 Nvme0n1 : 10.00 24060.20 93.99 0.00 0.00 0.00 0.00 0.00 00:38:38.085 [2024-12-09T23:20:22.558Z] =================================================================================================================== 00:38:38.085 [2024-12-09T23:20:22.558Z] Total : 24060.20 93.99 0.00 0.00 0.00 0.00 0.00 00:38:38.085 00:38:38.085 00:38:38.085 Latency(us) 00:38:38.085 [2024-12-09T23:20:22.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:38.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:38.085 Nvme0n1 : 10.00 24058.50 93.98 0.00 0.00 5316.81 3093.30 28940.70 00:38:38.085 
[2024-12-09T23:20:22.558Z] =================================================================================================================== 00:38:38.085 [2024-12-09T23:20:22.558Z] Total : 24058.50 93.98 0.00 0.00 5316.81 3093.30 28940.70 00:38:38.085 { 00:38:38.085 "results": [ 00:38:38.085 { 00:38:38.085 "job": "Nvme0n1", 00:38:38.085 "core_mask": "0x2", 00:38:38.085 "workload": "randwrite", 00:38:38.085 "status": "finished", 00:38:38.085 "queue_depth": 128, 00:38:38.085 "io_size": 4096, 00:38:38.085 "runtime": 10.003409, 00:38:38.085 "iops": 24058.49845787571, 00:38:38.085 "mibps": 93.978509601077, 00:38:38.085 "io_failed": 0, 00:38:38.085 "io_timeout": 0, 00:38:38.085 "avg_latency_us": 5316.80859025957, 00:38:38.085 "min_latency_us": 3093.2992, 00:38:38.085 "max_latency_us": 28940.6976 00:38:38.085 } 00:38:38.085 ], 00:38:38.085 "core_count": 1 00:38:38.085 } 00:38:38.085 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 635057 00:38:38.085 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 635057 ']' 00:38:38.085 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 635057 00:38:38.085 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:38.085 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:38.085 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 635057 00:38:38.085 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:38.085 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:38.085 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 635057' 00:38:38.085 killing process with pid 635057 00:38:38.085 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 635057 00:38:38.085 Received shutdown signal, test time was about 10.000000 seconds 00:38:38.085 00:38:38.085 Latency(us) 00:38:38.085 [2024-12-09T23:20:22.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:38.085 [2024-12-09T23:20:22.558Z] =================================================================================================================== 00:38:38.085 [2024-12-09T23:20:22.558Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:38.085 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 635057 00:38:38.085 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:38.345 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:38.606 
00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4bbeac-72ef-455a-abf6-d94313f0bb79 00:38:38.606 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 631916 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 631916 00:38:38.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 631916 Killed "${NVMF_APP[@]}" "$@" 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=637026 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 637026 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 637026 ']' 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:38.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:38.866 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:38.866 [2024-12-10 00:20:23.233759] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
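The step above is the heart of the dirty-lvstore case: the first target (pid 631916) is killed with SIGKILL so the logical volume store is never cleanly unloaded, then a fresh single-core target is started in interrupt mode to reload it. A minimal sketch of that sequence, with the workspace paths abbreviated and $old_tgt_pid as a placeholder for the killed process:

    kill -9 "$old_tgt_pid"                                    # no clean unload, lvstore left dirty
    build/bin/nvmf_tgt --interrupt-mode -m 0x1 &              # fresh target, single core
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    scripts/rpc.py bdev_wait_for_examine                      # blobstore recovery runs during examine
    scripts/rpc.py bdev_lvol_get_lvstores                     # lvstore reappears with its clusters intact

The log that follows shows exactly this: the AIO bdev is re-created, blobstore recovery notices are printed, and the free/total cluster counts (61/99) are checked against the pre-kill values.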
00:38:38.866 [2024-12-10 00:20:23.234703] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:38:38.866 [2024-12-10 00:20:23.234741] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:38.866 [2024-12-10 00:20:23.329334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:39.126 [2024-12-10 00:20:23.367903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:39.126 [2024-12-10 00:20:23.367941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:39.126 [2024-12-10 00:20:23.367951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:39.126 [2024-12-10 00:20:23.367960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:39.126 [2024-12-10 00:20:23.367971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:39.126 [2024-12-10 00:20:23.368548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:39.126 [2024-12-10 00:20:23.437119] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:39.126 [2024-12-10 00:20:23.437321] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:39.695 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:39.695 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:39.695 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:39.695 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:39.695 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:39.695 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:39.695 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:39.954 [2024-12-10 00:20:24.314703] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:39.955 [2024-12-10 00:20:24.314940] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:39.955 [2024-12-10 00:20:24.315035] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:39.955 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:39.955 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2 00:38:39.955 00:20:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2 00:38:39.955 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:39.955 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:39.955 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:39.955 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:39.955 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:40.213 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2 -t 2000 00:38:40.473 [ 00:38:40.473 { 00:38:40.473 "name": "9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2", 00:38:40.473 "aliases": [ 00:38:40.473 "lvs/lvol" 00:38:40.473 ], 00:38:40.473 "product_name": "Logical Volume", 00:38:40.473 "block_size": 4096, 00:38:40.473 "num_blocks": 38912, 00:38:40.473 "uuid": "9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2", 00:38:40.473 "assigned_rate_limits": { 00:38:40.473 "rw_ios_per_sec": 0, 00:38:40.473 "rw_mbytes_per_sec": 0, 00:38:40.473 "r_mbytes_per_sec": 0, 00:38:40.473 "w_mbytes_per_sec": 0 00:38:40.473 }, 00:38:40.473 "claimed": false, 00:38:40.473 "zoned": false, 00:38:40.473 "supported_io_types": { 00:38:40.473 "read": true, 00:38:40.473 "write": true, 00:38:40.473 "unmap": true, 00:38:40.473 "flush": false, 00:38:40.473 "reset": true, 00:38:40.473 "nvme_admin": false, 00:38:40.473 "nvme_io": false, 00:38:40.473 "nvme_io_md": false, 00:38:40.473 "write_zeroes": true, 00:38:40.473 "zcopy": false, 00:38:40.473 "get_zone_info": false, 00:38:40.473 "zone_management": false, 00:38:40.473 "zone_append": false, 00:38:40.473 "compare": false, 00:38:40.473 "compare_and_write": false, 00:38:40.473 "abort": false, 00:38:40.473 "seek_hole": true, 00:38:40.473 "seek_data": true, 00:38:40.473 "copy": false, 00:38:40.473 "nvme_iov_md": false 00:38:40.473 }, 00:38:40.473 "driver_specific": { 00:38:40.473 "lvol": { 00:38:40.473 "lvol_store_uuid": "6d4bbeac-72ef-455a-abf6-d94313f0bb79", 00:38:40.473 "base_bdev": "aio_bdev", 00:38:40.473 "thin_provision": false, 00:38:40.473 "num_allocated_clusters": 38, 00:38:40.473 "snapshot": false, 00:38:40.473 "clone": false, 00:38:40.473 "esnap_clone": false 00:38:40.473 } 00:38:40.474 } 00:38:40.474 } 00:38:40.474 ] 00:38:40.474 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:40.474 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4bbeac-72ef-455a-abf6-d94313f0bb79 00:38:40.474 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:40.474 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:40.474 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4bbeac-72ef-455a-abf6-d94313f0bb79 00:38:40.474 00:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:40.736 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:40.736 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:41.003 [2024-12-10 00:20:25.305059] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:41.003 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4bbeac-72ef-455a-abf6-d94313f0bb79 00:38:41.003 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:38:41.003 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4bbeac-72ef-455a-abf6-d94313f0bb79 00:38:41.003 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:41.003 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:41.003 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:41.003 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:41.003 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:41.004 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:41.004 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:41.004 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:41.004 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4bbeac-72ef-455a-abf6-d94313f0bb79 00:38:41.263 request: 00:38:41.263 { 00:38:41.263 "uuid": "6d4bbeac-72ef-455a-abf6-d94313f0bb79", 00:38:41.263 "method": "bdev_lvol_get_lvstores", 
00:38:41.263 "req_id": 1 00:38:41.263 } 00:38:41.263 Got JSON-RPC error response 00:38:41.263 response: 00:38:41.263 { 00:38:41.263 "code": -19, 00:38:41.263 "message": "No such device" 00:38:41.263 } 00:38:41.263 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:38:41.263 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:41.263 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:41.263 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:41.263 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:41.263 aio_bdev 00:38:41.523 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2 00:38:41.523 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2 00:38:41.523 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:41.523 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:41.523 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:41.523 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:41.523 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:41.523 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2 -t 2000 00:38:41.783 [ 00:38:41.783 { 00:38:41.783 "name": "9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2", 00:38:41.783 "aliases": [ 00:38:41.783 "lvs/lvol" 00:38:41.783 ], 00:38:41.783 "product_name": "Logical Volume", 00:38:41.783 "block_size": 4096, 00:38:41.783 "num_blocks": 38912, 00:38:41.783 "uuid": "9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2", 00:38:41.783 "assigned_rate_limits": { 00:38:41.783 "rw_ios_per_sec": 0, 00:38:41.783 "rw_mbytes_per_sec": 0, 00:38:41.783 "r_mbytes_per_sec": 0, 00:38:41.783 "w_mbytes_per_sec": 0 00:38:41.783 }, 00:38:41.783 "claimed": false, 00:38:41.783 "zoned": false, 00:38:41.783 "supported_io_types": { 00:38:41.783 "read": true, 00:38:41.783 "write": true, 00:38:41.783 "unmap": true, 00:38:41.783 "flush": false, 00:38:41.783 "reset": true, 00:38:41.783 "nvme_admin": false, 00:38:41.783 "nvme_io": false, 00:38:41.783 "nvme_io_md": false, 00:38:41.783 "write_zeroes": true, 00:38:41.783 "zcopy": false, 00:38:41.783 "get_zone_info": false, 00:38:41.783 "zone_management": false, 00:38:41.783 "zone_append": false, 00:38:41.783 
"compare": false, 00:38:41.783 "compare_and_write": false, 00:38:41.783 "abort": false, 00:38:41.783 "seek_hole": true, 00:38:41.783 "seek_data": true, 00:38:41.783 "copy": false, 00:38:41.783 "nvme_iov_md": false 00:38:41.783 }, 00:38:41.783 "driver_specific": { 00:38:41.783 "lvol": { 00:38:41.783 "lvol_store_uuid": "6d4bbeac-72ef-455a-abf6-d94313f0bb79", 00:38:41.783 "base_bdev": "aio_bdev", 00:38:41.783 "thin_provision": false, 00:38:41.783 "num_allocated_clusters": 38, 00:38:41.783 "snapshot": false, 00:38:41.783 "clone": false, 00:38:41.783 "esnap_clone": false 00:38:41.783 } 00:38:41.783 } 00:38:41.783 } 00:38:41.783 ] 00:38:41.783 00:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:41.783 00:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4bbeac-72ef-455a-abf6-d94313f0bb79 00:38:41.783 00:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:42.043 00:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:42.043 00:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4bbeac-72ef-455a-abf6-d94313f0bb79 00:38:42.043 00:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:42.043 00:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:42.043 00:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9a983fe9-8a0e-40f3-b5ea-b64afa94ecf2 00:38:42.303 00:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6d4bbeac-72ef-455a-abf6-d94313f0bb79 00:38:42.562 00:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:42.821 00:38:42.821 real 0m17.741s 00:38:42.821 user 0m33.907s 00:38:42.821 sys 0m4.654s 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:42.821 ************************************ 00:38:42.821 END TEST lvs_grow_dirty 00:38:42.821 ************************************ 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@812 -- # type=--id 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:42.821 nvmf_trace.0 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:42.821 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:42.821 rmmod nvme_tcp 00:38:42.821 rmmod nvme_fabrics 00:38:42.822 rmmod nvme_keyring 00:38:42.822 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:42.822 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:42.822 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:42.822 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 637026 ']' 00:38:42.822 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 637026 00:38:42.822 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 637026 ']' 00:38:42.822 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 637026 00:38:42.822 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:38:42.822 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:42.822 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 637026 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 
-- # process_name=reactor_0 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 637026' 00:38:43.082 killing process with pid 637026 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 637026 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 637026 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.082 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:45.622 00:38:45.622 real 0m44.691s 00:38:45.622 user 0m51.704s 00:38:45.622 sys 0m12.671s 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:45.622 ************************************ 00:38:45.622 END TEST nvmf_lvs_grow 00:38:45.622 ************************************ 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:45.622 ************************************ 00:38:45.622 START TEST nvmf_bdev_io_wait 00:38:45.622 ************************************ 00:38:45.622 00:20:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:45.622 * Looking for test storage... 00:38:45.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:45.622 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:45.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.623 --rc genhtml_branch_coverage=1 00:38:45.623 --rc genhtml_function_coverage=1 00:38:45.623 --rc genhtml_legend=1 00:38:45.623 --rc geninfo_all_blocks=1 00:38:45.623 --rc geninfo_unexecuted_blocks=1 00:38:45.623 00:38:45.623 ' 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:45.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.623 --rc genhtml_branch_coverage=1 00:38:45.623 --rc genhtml_function_coverage=1 00:38:45.623 --rc genhtml_legend=1 00:38:45.623 --rc geninfo_all_blocks=1 00:38:45.623 --rc geninfo_unexecuted_blocks=1 00:38:45.623 00:38:45.623 ' 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:45.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.623 --rc genhtml_branch_coverage=1 00:38:45.623 --rc genhtml_function_coverage=1 00:38:45.623 --rc genhtml_legend=1 00:38:45.623 --rc geninfo_all_blocks=1 00:38:45.623 --rc geninfo_unexecuted_blocks=1 00:38:45.623 00:38:45.623 ' 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:45.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.623 --rc genhtml_branch_coverage=1 00:38:45.623 --rc genhtml_function_coverage=1 00:38:45.623 --rc genhtml_legend=1 00:38:45.623 --rc geninfo_all_blocks=1 00:38:45.623 --rc 
geninfo_unexecuted_blocks=1 00:38:45.623 00:38:45.623 ' 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:45.623 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:53.757 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
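The device scan above works purely from PCI vendor:device IDs: with SPDK_TEST_NVMF_NICS=e810 only the Intel E810 IDs (0x1592, 0x159b) are kept, and the bound netdev name is then read back from sysfs for each matching function. A minimal way to reproduce the same lookup by hand, assuming lspci is installed and using the 0000:af:00.0 port reported below:

    lspci -d 8086:159b                         # list E810 functions (here 0000:af:00.0 and 0000:af:00.1)
    ls /sys/bus/pci/devices/0000:af:00.0/net   # netdev bound to that function (cvl_0_0)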
00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:53.758 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:53.758 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:53.758 Found net devices under 0000:af:00.0: cvl_0_0 00:38:53.758 
00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:53.758 Found net devices under 0000:af:00.1: cvl_0_1 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:53.758 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:53.759 00:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:53.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:53.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:38:53.759 00:38:53.759 --- 10.0.0.2 ping statistics --- 00:38:53.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:53.759 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:53.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:53.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:38:53.759 00:38:53.759 --- 10.0.0.1 ping statistics --- 00:38:53.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:53.759 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=641309 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 641309 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 641309 ']' 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:53.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
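For reference, the interface plumbing traced above can be reproduced on its own with roughly the following commands (a sketch distilled from the ip/iptables calls in this log; the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addresses are simply what this run used, and everything has to run as root):

# Recreate the target/initiator split used above: the e810 port that will serve
# NVMe/TCP (cvl_0_0) moves into its own network namespace, while the second port
# (cvl_0_1) stays in the default namespace as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic in; the rule is tagged SPDK_NVMF so teardown can strip it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Check connectivity in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1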
00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:53.759 00:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:53.759 [2024-12-10 00:20:37.255346] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:53.759 [2024-12-10 00:20:37.256301] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:38:53.759 [2024-12-10 00:20:37.256341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:53.759 [2024-12-10 00:20:37.337982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:53.759 [2024-12-10 00:20:37.380666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:53.759 [2024-12-10 00:20:37.380707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:53.759 [2024-12-10 00:20:37.380716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:53.759 [2024-12-10 00:20:37.380726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:53.759 [2024-12-10 00:20:37.380733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:53.759 [2024-12-10 00:20:37.382311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:53.759 [2024-12-10 00:20:37.382357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:53.759 [2024-12-10 00:20:37.382472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.759 [2024-12-10 00:20:37.382473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:53.759 [2024-12-10 00:20:37.386182] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
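Because nvmf_tgt was started with --wait-for-rpc, all of the remaining configuration is driven over the RPC socket at /var/tmp/spdk.sock. The rpc_cmd calls traced below correspond roughly to this standalone scripts/rpc.py sequence (a sketch of equivalent commands, not the test's literal code):

#!/usr/bin/env bash
# Sketch only: drive the already-running target over its default RPC socket.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_set_options -p 5 -c 1                 # pool/cache values taken from bdev_io_wait.sh@18 above
$RPC framework_start_init                       # finish the startup deferred by --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o -u 8192    # transport options exactly as traced above
$RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev with 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420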
00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:53.759 [2024-12-10 00:20:38.223975] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:53.759 [2024-12-10 00:20:38.224131] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:53.759 [2024-12-10 00:20:38.224625] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:53.759 [2024-12-10 00:20:38.224942] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:53.759 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:54.019 [2024-12-10 00:20:38.234588] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:54.019 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.019 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:54.019 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.019 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:54.019 Malloc0 00:38:54.019 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.019 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:54.020 [2024-12-10 00:20:38.307007] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=641590 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=641592 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:54.020 { 00:38:54.020 "params": { 00:38:54.020 "name": "Nvme$subsystem", 00:38:54.020 "trtype": "$TEST_TRANSPORT", 00:38:54.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:54.020 "adrfam": "ipv4", 00:38:54.020 "trsvcid": "$NVMF_PORT", 00:38:54.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:54.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:54.020 "hdgst": ${hdgst:-false}, 00:38:54.020 "ddgst": ${ddgst:-false} 00:38:54.020 }, 00:38:54.020 "method": "bdev_nvme_attach_controller" 00:38:54.020 } 00:38:54.020 EOF 00:38:54.020 )") 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=641594 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:54.020 { 00:38:54.020 "params": { 00:38:54.020 "name": "Nvme$subsystem", 00:38:54.020 "trtype": "$TEST_TRANSPORT", 00:38:54.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:54.020 "adrfam": "ipv4", 00:38:54.020 "trsvcid": "$NVMF_PORT", 00:38:54.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:54.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:54.020 "hdgst": ${hdgst:-false}, 00:38:54.020 "ddgst": ${ddgst:-false} 00:38:54.020 }, 00:38:54.020 "method": "bdev_nvme_attach_controller" 00:38:54.020 } 00:38:54.020 EOF 00:38:54.020 )") 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=641597 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:54.020 { 00:38:54.020 "params": { 00:38:54.020 "name": "Nvme$subsystem", 00:38:54.020 "trtype": "$TEST_TRANSPORT", 00:38:54.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:54.020 "adrfam": "ipv4", 00:38:54.020 "trsvcid": "$NVMF_PORT", 00:38:54.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:54.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:54.020 "hdgst": ${hdgst:-false}, 00:38:54.020 "ddgst": ${ddgst:-false} 00:38:54.020 }, 00:38:54.020 "method": "bdev_nvme_attach_controller" 00:38:54.020 } 00:38:54.020 EOF 00:38:54.020 )") 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:54.020 { 00:38:54.020 "params": { 00:38:54.020 "name": "Nvme$subsystem", 00:38:54.020 "trtype": "$TEST_TRANSPORT", 00:38:54.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:54.020 "adrfam": "ipv4", 00:38:54.020 "trsvcid": "$NVMF_PORT", 00:38:54.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:54.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:54.020 "hdgst": ${hdgst:-false}, 00:38:54.020 "ddgst": ${ddgst:-false} 00:38:54.020 }, 00:38:54.020 "method": "bdev_nvme_attach_controller" 00:38:54.020 } 00:38:54.020 EOF 00:38:54.020 )") 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 641590 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:54.020 "params": { 00:38:54.020 "name": "Nvme1", 00:38:54.020 "trtype": "tcp", 00:38:54.020 "traddr": "10.0.0.2", 00:38:54.020 "adrfam": "ipv4", 00:38:54.020 "trsvcid": "4420", 00:38:54.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:54.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:54.020 "hdgst": false, 00:38:54.020 "ddgst": false 00:38:54.020 }, 00:38:54.020 "method": "bdev_nvme_attach_controller" 00:38:54.020 }' 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:54.020 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:54.020 "params": { 00:38:54.020 "name": "Nvme1", 00:38:54.020 "trtype": "tcp", 00:38:54.020 "traddr": "10.0.0.2", 00:38:54.021 "adrfam": "ipv4", 00:38:54.021 "trsvcid": "4420", 00:38:54.021 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:54.021 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:54.021 "hdgst": false, 00:38:54.021 "ddgst": false 00:38:54.021 }, 00:38:54.021 "method": "bdev_nvme_attach_controller" 00:38:54.021 }' 00:38:54.021 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:54.021 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:54.021 "params": { 00:38:54.021 "name": "Nvme1", 00:38:54.021 "trtype": "tcp", 00:38:54.021 "traddr": "10.0.0.2", 00:38:54.021 "adrfam": "ipv4", 00:38:54.021 "trsvcid": "4420", 00:38:54.021 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:54.021 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:54.021 "hdgst": false, 00:38:54.021 "ddgst": false 00:38:54.021 }, 00:38:54.021 "method": "bdev_nvme_attach_controller" 00:38:54.021 }' 00:38:54.021 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:54.021 00:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:54.021 "params": { 00:38:54.021 "name": "Nvme1", 00:38:54.021 "trtype": "tcp", 00:38:54.021 "traddr": "10.0.0.2", 00:38:54.021 "adrfam": "ipv4", 00:38:54.021 "trsvcid": "4420", 00:38:54.021 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:54.021 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:54.021 "hdgst": false, 00:38:54.021 "ddgst": false 00:38:54.021 }, 00:38:54.021 "method": "bdev_nvme_attach_controller" 00:38:54.021 }' 00:38:54.021 [2024-12-10 00:20:38.362485] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:38:54.021 [2024-12-10 00:20:38.362543] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:54.021 [2024-12-10 00:20:38.365745] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:38:54.021 [2024-12-10 00:20:38.365748] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
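Each gen_nvmf_target_json heredoc above is rendered by jq into the --json config that its bdevperf instance reads from /dev/fd/63. The rendered file has roughly the following shape (a sketch; the outer wrapper, and any trailing entries such as bdev_wait_for_examine, come from gen_nvmf_target_json in the test's nvmf/common.sh):

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}

The attached controller shows up as the Nvme1n1 bdev that the write/read/flush/unmap jobs report on in the result tables below.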
00:38:54.021 [2024-12-10 00:20:38.365797] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:54.021 [2024-12-10 00:20:38.365798] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:54.021 [2024-12-10 00:20:38.368393] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:38:54.021 [2024-12-10 00:20:38.368442] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:54.281 [2024-12-10 00:20:38.567268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.281 [2024-12-10 00:20:38.626984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:54.281 [2024-12-10 00:20:38.635479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.281 [2024-12-10 00:20:38.676596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:54.281 [2024-12-10 00:20:38.686498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.281 [2024-12-10 00:20:38.722201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:54.539 [2024-12-10 00:20:38.784806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.539 [2024-12-10 00:20:38.838536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:54.540 Running I/O for 1 seconds... 00:38:54.540 Running I/O for 1 seconds... 00:38:54.798 Running I/O for 1 seconds... 00:38:54.798 Running I/O for 1 seconds... 
00:38:55.734 8429.00 IOPS, 32.93 MiB/s 00:38:55.734 Latency(us) 00:38:55.734 [2024-12-09T23:20:40.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:55.734 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:55.734 Nvme1n1 : 1.02 8425.98 32.91 0.00 0.00 15082.92 3342.34 25585.25 00:38:55.734 [2024-12-09T23:20:40.207Z] =================================================================================================================== 00:38:55.734 [2024-12-09T23:20:40.207Z] Total : 8425.98 32.91 0.00 0.00 15082.92 3342.34 25585.25 00:38:55.734 247760.00 IOPS, 967.81 MiB/s 00:38:55.734 Latency(us) 00:38:55.734 [2024-12-09T23:20:40.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:55.734 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:55.734 Nvme1n1 : 1.00 247385.29 966.35 0.00 0.00 514.64 217.91 1487.67 00:38:55.734 [2024-12-09T23:20:40.207Z] =================================================================================================================== 00:38:55.734 [2024-12-09T23:20:40.207Z] Total : 247385.29 966.35 0.00 0.00 514.64 217.91 1487.67 00:38:55.734 7841.00 IOPS, 30.63 MiB/s 00:38:55.734 Latency(us) 00:38:55.734 [2024-12-09T23:20:40.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:55.734 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:55.734 Nvme1n1 : 1.01 7947.94 31.05 0.00 0.00 16061.39 4508.88 25060.97 00:38:55.734 [2024-12-09T23:20:40.207Z] =================================================================================================================== 00:38:55.734 [2024-12-09T23:20:40.207Z] Total : 7947.94 31.05 0.00 0.00 16061.39 4508.88 25060.97 00:38:55.734 13351.00 IOPS, 52.15 MiB/s 00:38:55.734 Latency(us) 00:38:55.734 [2024-12-09T23:20:40.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:55.734 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:55.734 Nvme1n1 : 1.01 13446.98 52.53 0.00 0.00 9497.25 3237.48 13736.35 00:38:55.734 [2024-12-09T23:20:40.207Z] =================================================================================================================== 00:38:55.734 [2024-12-09T23:20:40.207Z] Total : 13446.98 52.53 0.00 0.00 9497.25 3237.48 13736.35 00:38:55.734 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 641592 00:38:55.734 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 641594 00:38:55.734 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 641597 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:55.994 rmmod nvme_tcp 00:38:55.994 rmmod nvme_fabrics 00:38:55.994 rmmod nvme_keyring 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 641309 ']' 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 641309 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 641309 ']' 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 641309 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 641309 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 641309' 00:38:55.994 killing process with pid 641309 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 641309 00:38:55.994 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 641309 00:38:56.253 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:56.253 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:56.253 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:56.253 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:56.253 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:38:56.253 
00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:56.253 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:38:56.253 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:56.253 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:56.253 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.253 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:56.253 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.175 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:58.175 00:38:58.175 real 0m12.966s 00:38:58.175 user 0m15.690s 00:38:58.175 sys 0m8.115s 00:38:58.175 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:58.175 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:58.175 ************************************ 00:38:58.175 END TEST nvmf_bdev_io_wait 00:38:58.175 ************************************ 00:38:58.436 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:58.436 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:58.436 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:58.436 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:58.436 ************************************ 00:38:58.436 START TEST nvmf_queue_depth 00:38:58.436 ************************************ 00:38:58.436 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:58.436 * Looking for test storage... 
00:38:58.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:58.437 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:58.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.697 --rc genhtml_branch_coverage=1 00:38:58.697 --rc genhtml_function_coverage=1 00:38:58.697 --rc genhtml_legend=1 00:38:58.697 --rc geninfo_all_blocks=1 00:38:58.697 --rc geninfo_unexecuted_blocks=1 00:38:58.697 00:38:58.697 ' 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:58.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.697 --rc genhtml_branch_coverage=1 00:38:58.697 --rc genhtml_function_coverage=1 00:38:58.697 --rc genhtml_legend=1 00:38:58.697 --rc geninfo_all_blocks=1 00:38:58.697 --rc geninfo_unexecuted_blocks=1 00:38:58.697 00:38:58.697 ' 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:58.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.697 --rc genhtml_branch_coverage=1 00:38:58.697 --rc genhtml_function_coverage=1 00:38:58.697 --rc genhtml_legend=1 00:38:58.697 --rc geninfo_all_blocks=1 00:38:58.697 --rc geninfo_unexecuted_blocks=1 00:38:58.697 00:38:58.697 ' 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:58.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.697 --rc genhtml_branch_coverage=1 00:38:58.697 --rc genhtml_function_coverage=1 00:38:58.697 --rc genhtml_legend=1 00:38:58.697 --rc geninfo_all_blocks=1 00:38:58.697 --rc 
geninfo_unexecuted_blocks=1 00:38:58.697 00:38:58.697 ' 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.697 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:58.698 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:06.840 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:06.841 00:20:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:06.841 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:06.841 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:39:06.841 Found net devices under 0000:af:00.0: cvl_0_0 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:06.841 Found net devices under 0000:af:00.1: cvl_0_1 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:06.841 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:06.842 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:06.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:06.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:39:06.842 00:39:06.842 --- 10.0.0.2 ping statistics --- 00:39:06.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:06.842 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:06.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:06.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:39:06.842 00:39:06.842 --- 10.0.0.1 ping statistics --- 00:39:06.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:06.842 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=645564 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 645564 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 645564 ']' 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:06.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
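Note: at this point the suite starts the NVMe-oF target inside the target namespace with interrupt mode enabled and a single-core mask, then waits for its RPC socket. A rough, illustrative equivalent of that launch-and-wait step (the real logic lives in nvmf/common.sh and autotest_common.sh; the polling loop below is a simplified assumption, not the suite's code):

# Launch nvmf_tgt in the target namespace and wait for /var/tmp/spdk.sock to appear.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do
    # Bail out if the target died before it could start listening.
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done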
00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:06.842 00:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:06.842 [2024-12-10 00:20:50.246118] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:06.842 [2024-12-10 00:20:50.247080] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:39:06.842 [2024-12-10 00:20:50.247114] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:06.842 [2024-12-10 00:20:50.331244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.842 [2024-12-10 00:20:50.371166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:06.842 [2024-12-10 00:20:50.371211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:06.842 [2024-12-10 00:20:50.371220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:06.842 [2024-12-10 00:20:50.371229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:06.842 [2024-12-10 00:20:50.371236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:06.842 [2024-12-10 00:20:50.371812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:06.842 [2024-12-10 00:20:50.440770] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:06.842 [2024-12-10 00:20:50.440993] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
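Note: the notices above are consistent with the -m 0x2 core mask passed to nvmf_tgt: one core is available and the reactor starts on core 1. A quick illustration of how a hex core mask maps to cores (not part of the suite, purely for reference):

# Print the CPU cores selected by an SPDK-style hex core mask.
mask=0x2
for core in $(seq 0 31); do
    if (( (mask >> core) & 1 )); then
        echo "core $core selected"
    fi
done
# 0x2 has only bit 1 set, so the target runs a single reactor on core 1, matching the log.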
00:39:06.842 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:06.842 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:06.842 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:06.842 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:06.842 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:06.842 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:06.842 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:06.842 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.842 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:06.842 [2024-12-10 00:20:51.128544] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:06.842 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.842 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:06.842 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.842 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:06.842 Malloc0 00:39:06.842 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:06.843 [2024-12-10 00:20:51.208776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=645838 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 645838 /var/tmp/bdevperf.sock 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 645838 ']' 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:06.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:06.843 00:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:06.843 [2024-12-10 00:20:51.263687] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
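Note: the rpc_cmd calls traced above provision the target over /var/tmp/spdk.sock: a TCP transport with 8192-byte IO units, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.2:4420. Condensed into plain rpc.py invocations this is roughly the following (illustrative only; the suite drives the same RPCs through its rpc_cmd wrapper):

# Provision the NVMe-oF/TCP target that bdevperf connects to below.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420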
00:39:06.843 [2024-12-10 00:20:51.263741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645838 ] 00:39:07.102 [2024-12-10 00:20:51.353562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:07.102 [2024-12-10 00:20:51.393587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:07.671 00:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:07.671 00:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:07.671 00:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:07.671 00:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.671 00:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:07.930 NVMe0n1 00:39:07.930 00:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.930 00:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:07.930 Running I/O for 10 seconds... 00:39:09.810 12278.00 IOPS, 47.96 MiB/s [2024-12-09T23:20:55.673Z] 12293.00 IOPS, 48.02 MiB/s [2024-12-09T23:20:56.610Z] 12463.67 IOPS, 48.69 MiB/s [2024-12-09T23:20:57.547Z] 12546.75 IOPS, 49.01 MiB/s [2024-12-09T23:20:58.485Z] 12661.20 IOPS, 49.46 MiB/s [2024-12-09T23:20:59.423Z] 12685.83 IOPS, 49.55 MiB/s [2024-12-09T23:21:00.364Z] 12729.00 IOPS, 49.72 MiB/s [2024-12-09T23:21:01.303Z] 12785.50 IOPS, 49.94 MiB/s [2024-12-09T23:21:02.417Z] 12753.78 IOPS, 49.82 MiB/s [2024-12-09T23:21:02.417Z] 12789.50 IOPS, 49.96 MiB/s 00:39:17.944 Latency(us) 00:39:17.944 [2024-12-09T23:21:02.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:17.944 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:17.944 Verification LBA range: start 0x0 length 0x4000 00:39:17.944 NVMe0n1 : 10.06 12803.00 50.01 0.00 0.00 79717.98 18979.23 53267.66 00:39:17.944 [2024-12-09T23:21:02.417Z] =================================================================================================================== 00:39:17.944 [2024-12-09T23:21:02.417Z] Total : 12803.00 50.01 0.00 0.00 79717.98 18979.23 53267.66 00:39:17.944 { 00:39:17.944 "results": [ 00:39:17.944 { 00:39:17.944 "job": "NVMe0n1", 00:39:17.944 "core_mask": "0x1", 00:39:17.944 "workload": "verify", 00:39:17.944 "status": "finished", 00:39:17.944 "verify_range": { 00:39:17.944 "start": 0, 00:39:17.944 "length": 16384 00:39:17.944 }, 00:39:17.944 "queue_depth": 1024, 00:39:17.944 "io_size": 4096, 00:39:17.944 "runtime": 10.064128, 00:39:17.944 "iops": 12802.996941215373, 00:39:17.944 "mibps": 50.01170680162255, 00:39:17.944 "io_failed": 0, 00:39:17.944 "io_timeout": 0, 00:39:17.944 "avg_latency_us": 79717.9849069204, 00:39:17.944 "min_latency_us": 18979.2256, 00:39:17.944 "max_latency_us": 53267.6608 00:39:17.944 } 00:39:17.944 ], 
00:39:17.944 "core_count": 1 00:39:17.944 } 00:39:17.944 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 645838 00:39:17.944 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 645838 ']' 00:39:17.944 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 645838 00:39:17.944 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:17.944 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:17.944 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 645838 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 645838' 00:39:18.226 killing process with pid 645838 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 645838 00:39:18.226 Received shutdown signal, test time was about 10.000000 seconds 00:39:18.226 00:39:18.226 Latency(us) 00:39:18.226 [2024-12-09T23:21:02.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:18.226 [2024-12-09T23:21:02.699Z] =================================================================================================================== 00:39:18.226 [2024-12-09T23:21:02.699Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 645838 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:18.226 rmmod nvme_tcp 00:39:18.226 rmmod nvme_fabrics 00:39:18.226 rmmod nvme_keyring 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:39:18.226 00:21:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 645564 ']' 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 645564 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 645564 ']' 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 645564 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:18.226 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 645564 00:39:18.485 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:18.485 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:18.485 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 645564' 00:39:18.485 killing process with pid 645564 00:39:18.485 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 645564 00:39:18.485 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 645564 00:39:18.485 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:18.486 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:18.486 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:18.486 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:18.486 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:39:18.486 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:18.486 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:39:18.486 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:18.486 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:18.486 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:18.486 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:18.486 00:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:21.024 00:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:21.024 00:39:21.024 real 0m22.278s 00:39:21.024 user 0m24.072s 00:39:21.024 sys 0m7.936s 00:39:21.024 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:39:21.024 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:21.024 ************************************ 00:39:21.024 END TEST nvmf_queue_depth 00:39:21.024 ************************************ 00:39:21.024 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:21.024 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:21.024 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:21.024 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:21.024 ************************************ 00:39:21.024 START TEST nvmf_target_multipath 00:39:21.024 ************************************ 00:39:21.024 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:21.024 * Looking for test storage... 00:39:21.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:21.024 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:21.024 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:39:21.024 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:21.024 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:39:21.025 00:21:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:21.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.025 --rc genhtml_branch_coverage=1 00:39:21.025 --rc genhtml_function_coverage=1 00:39:21.025 --rc genhtml_legend=1 00:39:21.025 --rc geninfo_all_blocks=1 00:39:21.025 --rc geninfo_unexecuted_blocks=1 00:39:21.025 00:39:21.025 ' 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:21.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.025 --rc genhtml_branch_coverage=1 00:39:21.025 --rc genhtml_function_coverage=1 00:39:21.025 --rc genhtml_legend=1 00:39:21.025 --rc geninfo_all_blocks=1 00:39:21.025 --rc geninfo_unexecuted_blocks=1 00:39:21.025 00:39:21.025 ' 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:21.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.025 --rc genhtml_branch_coverage=1 00:39:21.025 --rc genhtml_function_coverage=1 00:39:21.025 --rc genhtml_legend=1 00:39:21.025 --rc geninfo_all_blocks=1 00:39:21.025 --rc 
geninfo_unexecuted_blocks=1 00:39:21.025 00:39:21.025 ' 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:21.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.025 --rc genhtml_branch_coverage=1 00:39:21.025 --rc genhtml_function_coverage=1 00:39:21.025 --rc genhtml_legend=1 00:39:21.025 --rc geninfo_all_blocks=1 00:39:21.025 --rc geninfo_unexecuted_blocks=1 00:39:21.025 00:39:21.025 ' 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
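Note: the multipath test's common.sh prelude above generates a host NQN with nvme gen-hostnqn and sets NVME_HOSTID to the same UUID. The value follows the standard UUID-based form nqn.2014-08.org.nvmexpress:uuid:<uuid>; a rough stand-in without nvme-cli, for illustration only:

# Build a UUID-based host NQN of the same shape as the one logged above.
uuid=$(cat /proc/sys/kernel/random/uuid)
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
NVME_HOSTID="${NVME_HOSTNQN##*:uuid:}"   # the host ID is the UUID portion of the NQN
echo "$NVME_HOSTNQN"
echo "$NVME_HOSTID"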
00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:21.025 00:21:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:21.025 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:21.026 00:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
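Note: nvmftestinit next re-runs the NIC discovery done by gather_supported_nvmf_pci_devs, matching supported vendor/device IDs (the E810 NICs found below report 0x8086:0x159b) and collecting their net interfaces from sysfs. A simplified sketch of that kind of scan (illustrative; the real helper in nvmf/common.sh also covers x722 and Mellanox IDs and RDMA-specific checks):

# Scan sysfs for Intel E810 (0x8086:0x159b) PCI functions and list their netdevs.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor" 2>/dev/null)
    device=$(cat "$pci/device" 2>/dev/null)
    [ "$vendor" = "0x8086" ] && [ "$device" = "0x159b" ] || continue
    for net in "$pci"/net/*; do
        [ -e "$net" ] || continue
        echo "Found net device under ${pci##*/}: ${net##*/}"
    done
done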
00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:29.152 00:21:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:29.152 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:29.152 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:29.152 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:29.153 00:21:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:29.153 Found net devices under 0000:af:00.0: cvl_0_0 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:29.153 Found net devices under 0000:af:00.1: cvl_0_1 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:29.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:29.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:39:29.153 00:39:29.153 --- 10.0.0.2 ping statistics --- 00:39:29.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.153 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:29.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:29.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:39:29.153 00:39:29.153 --- 10.0.0.1 ping statistics --- 00:39:29.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.153 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:29.153 only one NIC for nvmf test 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:29.153 rmmod nvme_tcp 00:39:29.153 rmmod nvme_fabrics 00:39:29.153 rmmod nvme_keyring 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:29.153 00:21:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:29.153 00:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:30.539 00:21:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:30.539 00:39:30.539 real 0m9.711s 00:39:30.539 user 0m2.103s 00:39:30.539 sys 0m5.683s 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:30.539 ************************************ 00:39:30.539 END TEST nvmf_target_multipath 00:39:30.539 ************************************ 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:30.539 ************************************ 00:39:30.539 START TEST nvmf_zcopy 00:39:30.539 ************************************ 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:30.539 * Looking for test storage... 
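[Editor's note] The multipath test bails out at target/multipath.sh@46 ("only one NIC for nvmf test") because NVMF_SECOND_TARGET_IP was never set, but the trace above still walks through a full nvmf_tcp_init setup and nvmftestfini teardown. Condensed into a standalone sketch (interface names, addresses and the SPDK_NVMF comment tag are taken from the trace; this is an illustration of the sequence, not the verbatim nvmf/common.sh code):

    # Sketch of the setup/teardown sequence traced above.
    TARGET_NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator port stays in the root ns
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    # tag the rule so teardown can strip exactly what the test added
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                # root ns -> target namespace
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # target namespace -> root ns
    # teardown (nvmftestfini): drop only the tagged rules, delete the namespace, flush
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete "$TARGET_NS"
    ip -4 addr flush cvl_0_1

The same cycle repeats below for the nvmf_zcopy test, which is why the PCI scan, namespace setup and ping checks appear a second time.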
00:39:30.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:39:30.539 00:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:30.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.803 --rc genhtml_branch_coverage=1 00:39:30.803 --rc genhtml_function_coverage=1 00:39:30.803 --rc genhtml_legend=1 00:39:30.803 --rc geninfo_all_blocks=1 00:39:30.803 --rc geninfo_unexecuted_blocks=1 00:39:30.803 00:39:30.803 ' 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:30.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.803 --rc genhtml_branch_coverage=1 00:39:30.803 --rc genhtml_function_coverage=1 00:39:30.803 --rc genhtml_legend=1 00:39:30.803 --rc geninfo_all_blocks=1 00:39:30.803 --rc geninfo_unexecuted_blocks=1 00:39:30.803 00:39:30.803 ' 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:30.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.803 --rc genhtml_branch_coverage=1 00:39:30.803 --rc genhtml_function_coverage=1 00:39:30.803 --rc genhtml_legend=1 00:39:30.803 --rc geninfo_all_blocks=1 00:39:30.803 --rc geninfo_unexecuted_blocks=1 00:39:30.803 00:39:30.803 ' 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:30.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.803 --rc genhtml_branch_coverage=1 00:39:30.803 --rc genhtml_function_coverage=1 00:39:30.803 --rc genhtml_legend=1 00:39:30.803 --rc geninfo_all_blocks=1 00:39:30.803 --rc geninfo_unexecuted_blocks=1 00:39:30.803 00:39:30.803 ' 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:30.803 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:30.804 00:21:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:30.804 00:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:38.924 00:21:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:38.924 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:38.924 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:38.924 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:38.925 Found net devices under 0000:af:00.0: cvl_0_0 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:38.925 Found net devices under 0000:af:00.1: cvl_0_1 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:38.925 00:21:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:38.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:38.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:39:38.925 00:39:38.925 --- 10.0.0.2 ping statistics --- 00:39:38.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:38.925 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:38.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:38.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:39:38.925 00:39:38.925 --- 10.0.0.1 ping statistics --- 00:39:38.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:38.925 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=655584 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 655584 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 655584 ']' 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:38.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:38.925 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:38.925 [2024-12-10 00:21:22.462803] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:38.925 [2024-12-10 00:21:22.463799] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:39:38.925 [2024-12-10 00:21:22.463843] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:38.925 [2024-12-10 00:21:22.559736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.925 [2024-12-10 00:21:22.602129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:38.925 [2024-12-10 00:21:22.602163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:38.925 [2024-12-10 00:21:22.602175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:38.925 [2024-12-10 00:21:22.602184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:38.925 [2024-12-10 00:21:22.602191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:38.925 [2024-12-10 00:21:22.602737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:38.925 [2024-12-10 00:21:22.669788] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:38.925 [2024-12-10 00:21:22.670014] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
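[Editor's note] For the zcopy test the topology is rebuilt and nvmfappstart launches the target inside the namespace in interrupt mode; waitforlisten then blocks until the RPC socket answers (the "Set SPDK running in interrupt mode" and per-thread "intr mode" notices above come from that startup). A rough stand-in for that start-and-wait step, assuming the default /var/tmp/spdk.sock RPC socket shown in the trace and a relative SPDK build path; this is not the verbatim waitforlisten implementation:

    # Start nvmf_tgt inside the target namespace in interrupt mode, then poll the
    # RPC socket until the app answers (stand-in for nvmfappstart + waitforlisten).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening"; exit 1; }
        sleep 0.5
    done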
00:39:38.925 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:38.925 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:39:38.925 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:38.925 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:38.925 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:38.925 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:38.925 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:38.925 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:38.925 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.925 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:38.925 [2024-12-10 00:21:23.359473] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:38.925 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.925 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:38.925 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.925 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:38.926 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.926 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:38.926 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.926 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:38.926 [2024-12-10 00:21:23.387769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:38.926 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.926 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:38.926 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.926 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:39.186 00:21:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:39.186 malloc0 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:39.186 { 00:39:39.186 "params": { 00:39:39.186 "name": "Nvme$subsystem", 00:39:39.186 "trtype": "$TEST_TRANSPORT", 00:39:39.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:39.186 "adrfam": "ipv4", 00:39:39.186 "trsvcid": "$NVMF_PORT", 00:39:39.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:39.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:39.186 "hdgst": ${hdgst:-false}, 00:39:39.186 "ddgst": ${ddgst:-false} 00:39:39.186 }, 00:39:39.186 "method": "bdev_nvme_attach_controller" 00:39:39.186 } 00:39:39.186 EOF 00:39:39.186 )") 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:39.186 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:39.186 "params": { 00:39:39.186 "name": "Nvme1", 00:39:39.186 "trtype": "tcp", 00:39:39.186 "traddr": "10.0.0.2", 00:39:39.186 "adrfam": "ipv4", 00:39:39.186 "trsvcid": "4420", 00:39:39.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:39.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:39.186 "hdgst": false, 00:39:39.186 "ddgst": false 00:39:39.186 }, 00:39:39.186 "method": "bdev_nvme_attach_controller" 00:39:39.186 }' 00:39:39.186 [2024-12-10 00:21:23.493443] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
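[Editor's note] Once the target is up, zcopy.sh provisions it through rpc_cmd (the autotest wrapper around scripts/rpc.py): a zero-copy-enabled TCP transport, one subsystem, a listener on the namespaced address, and a malloc bdev attached as namespace 1. Written out as direct rpc.py calls (the socket path is the default from the trace; the commands and arguments are verbatim from the rpc_cmd lines above):

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                 # TCP transport with zero-copy enabled
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0                        # 32 MiB bdev, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1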
00:39:39.186 [2024-12-10 00:21:23.493501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655643 ] 00:39:39.186 [2024-12-10 00:21:23.584362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.186 [2024-12-10 00:21:23.624836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:39.453 Running I/O for 10 seconds... 00:39:41.771 8596.00 IOPS, 67.16 MiB/s [2024-12-09T23:21:27.181Z] 8646.00 IOPS, 67.55 MiB/s [2024-12-09T23:21:28.116Z] 8655.67 IOPS, 67.62 MiB/s [2024-12-09T23:21:29.057Z] 8674.00 IOPS, 67.77 MiB/s [2024-12-09T23:21:29.996Z] 8675.60 IOPS, 67.78 MiB/s [2024-12-09T23:21:30.939Z] 8685.83 IOPS, 67.86 MiB/s [2024-12-09T23:21:31.884Z] 8667.14 IOPS, 67.71 MiB/s [2024-12-09T23:21:33.261Z] 8667.50 IOPS, 67.71 MiB/s [2024-12-09T23:21:34.207Z] 8675.33 IOPS, 67.78 MiB/s [2024-12-09T23:21:34.207Z] 8678.20 IOPS, 67.80 MiB/s 00:39:49.734 Latency(us) 00:39:49.734 [2024-12-09T23:21:34.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:49.734 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:49.734 Verification LBA range: start 0x0 length 0x1000 00:39:49.734 Nvme1n1 : 10.05 8645.68 67.54 0.00 0.00 14708.46 2084.04 43620.76 00:39:49.734 [2024-12-09T23:21:34.207Z] =================================================================================================================== 00:39:49.734 [2024-12-09T23:21:34.207Z] Total : 8645.68 67.54 0.00 0.00 14708.46 2084.04 43620.76 00:39:49.734 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=657466 00:39:49.734 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:49.734 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:49.734 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:49.734 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:49.734 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:49.734 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:49.734 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:49.734 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:49.734 { 00:39:49.734 "params": { 00:39:49.734 "name": "Nvme$subsystem", 00:39:49.734 "trtype": "$TEST_TRANSPORT", 00:39:49.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:49.734 "adrfam": "ipv4", 00:39:49.734 "trsvcid": "$NVMF_PORT", 00:39:49.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:49.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:49.734 "hdgst": ${hdgst:-false}, 00:39:49.734 "ddgst": ${ddgst:-false} 00:39:49.734 }, 00:39:49.734 "method": "bdev_nvme_attach_controller" 00:39:49.734 } 00:39:49.734 EOF 00:39:49.734 )") 00:39:49.734 [2024-12-10 00:21:34.059114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:39:49.734 [2024-12-10 00:21:34.059147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.734 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:49.734 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:49.734 [2024-12-10 00:21:34.071080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.734 [2024-12-10 00:21:34.071095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.734 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:49.734 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:49.734 "params": { 00:39:49.734 "name": "Nvme1", 00:39:49.734 "trtype": "tcp", 00:39:49.734 "traddr": "10.0.0.2", 00:39:49.734 "adrfam": "ipv4", 00:39:49.734 "trsvcid": "4420", 00:39:49.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:49.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:49.734 "hdgst": false, 00:39:49.734 "ddgst": false 00:39:49.734 }, 00:39:49.734 "method": "bdev_nvme_attach_controller" 00:39:49.734 }' 00:39:49.734 [2024-12-10 00:21:34.083080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.734 [2024-12-10 00:21:34.083093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.734 [2024-12-10 00:21:34.095080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.734 [2024-12-10 00:21:34.095092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.734 [2024-12-10 00:21:34.100668] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
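[Editor's note] Both bdevperf runs above read their configuration from /dev/fd/62 or /dev/fd/63: gen_nvmf_target_json prints a bdev_nvme_attach_controller entry for 10.0.0.2:4420 (the object is visible verbatim in the trace) and bash process substitution hands it to bdevperf as an anonymous file descriptor. A sketch of the two invocations; the outer "subsystems"/"bdev" wrapper in the comment is the usual SPDK JSON-config shape and is assumed here, since only the inner entry appears in the log:

    # first run: 10 s of verify I/O, queue depth 128, 8 KiB I/O size
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
    # second run: 5 s of 50/50 random read/write against the same target
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192
    # roughly what <(gen_nvmf_target_json) expands to:
    # { "subsystems": [ { "subsystem": "bdev", "config": [ {
    #     "method": "bdev_nvme_attach_controller",
    #     "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
    #                 "adrfam": "ipv4", "trsvcid": "4420",
    #                 "subnqn": "nqn.2016-06.io.spdk:cnode1",
    #                 "hostnqn": "nqn.2016-06.io.spdk:host1",
    #                 "hdgst": false, "ddgst": false } } ] } ] }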
00:39:49.734 [2024-12-10 00:21:34.100714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid657466 ] 00:39:49.734 [2024-12-10 00:21:34.107081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.734 [2024-12-10 00:21:34.107093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.734 [2024-12-10 00:21:34.119080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.735 [2024-12-10 00:21:34.119093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.735 [2024-12-10 00:21:34.131081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.735 [2024-12-10 00:21:34.131093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.735 [2024-12-10 00:21:34.143081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.735 [2024-12-10 00:21:34.143093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.735 [2024-12-10 00:21:34.155080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.735 [2024-12-10 00:21:34.155094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.735 [2024-12-10 00:21:34.167078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.735 [2024-12-10 00:21:34.167092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.735 [2024-12-10 00:21:34.179081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.735 [2024-12-10 00:21:34.179104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.735 [2024-12-10 00:21:34.188986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:49.735 [2024-12-10 00:21:34.191078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.735 [2024-12-10 00:21:34.191090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.735 [2024-12-10 00:21:34.203084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.735 [2024-12-10 00:21:34.203100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.215079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.215092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.227086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.227104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.229014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:49.994 [2024-12-10 00:21:34.239085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.239100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.251090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:39:49.994 [2024-12-10 00:21:34.251109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.263084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.263100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.275095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.275109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.287082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.287095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.299079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.299093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.311088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.311106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.323087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.323105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.335084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.335100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.347088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.347104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.359085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.359102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.371088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.371108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 Running I/O for 5 seconds... 
00:39:49.994 [2024-12-10 00:21:34.383150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.383169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.994 [2024-12-10 00:21:34.398630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.994 [2024-12-10 00:21:34.398651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.995 [2024-12-10 00:21:34.412857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.995 [2024-12-10 00:21:34.412882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.995 [2024-12-10 00:21:34.427472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.995 [2024-12-10 00:21:34.427492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.995 [2024-12-10 00:21:34.442144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.995 [2024-12-10 00:21:34.442164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:49.995 [2024-12-10 00:21:34.456878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:49.995 [2024-12-10 00:21:34.456898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.471423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.471444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.486686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.486707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.502620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.502640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.517111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.517131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.531717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.531737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.547584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.547603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.563133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.563154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.576712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.576731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.591675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 
[2024-12-10 00:21:34.591695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.606785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.606804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.619985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.620009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.635299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.635319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.647504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.647524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.662412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.662433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.676863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.676884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.691071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.691091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.704618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.704638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.254 [2024-12-10 00:21:34.719330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.254 [2024-12-10 00:21:34.719350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.730836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.730857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.745110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.745134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.759634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.759653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.774926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.774947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.788763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.788782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.803371] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.803390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.818888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.818908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.833010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.833030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.847627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.847646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.863257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.863277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.873976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.873995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.888540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.888565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.903212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.903232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.915609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.915629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.930694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.930715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.945265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.945286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.959702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.959722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.513 [2024-12-10 00:21:34.975944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.513 [2024-12-10 00:21:34.975965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:34.990717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:34.990739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.004189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.004210] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.018915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.018935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.034693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.034714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.048892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.048923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.063565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.063585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.079095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.079116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.092930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.092950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.107345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.107366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.123184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.123204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.134502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.134522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.149313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.149334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.163701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.163726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.178700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.178721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.192045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.192065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.206963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.206984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.222661] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.222682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:50.772 [2024-12-10 00:21:35.236922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:50.772 [2024-12-10 00:21:35.236943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.031 [2024-12-10 00:21:35.251427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.031 [2024-12-10 00:21:35.251447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.031 [2024-12-10 00:21:35.267307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.031 [2024-12-10 00:21:35.267328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.031 [2024-12-10 00:21:35.280633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.031 [2024-12-10 00:21:35.280654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.031 [2024-12-10 00:21:35.295415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.031 [2024-12-10 00:21:35.295435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.031 [2024-12-10 00:21:35.311100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.031 [2024-12-10 00:21:35.311120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.031 [2024-12-10 00:21:35.323747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.031 [2024-12-10 00:21:35.323767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.031 [2024-12-10 00:21:35.338869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.031 [2024-12-10 00:21:35.338889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.031 [2024-12-10 00:21:35.352484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.031 [2024-12-10 00:21:35.352504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.031 [2024-12-10 00:21:35.367708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.031 [2024-12-10 00:21:35.367728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.031 [2024-12-10 00:21:35.382658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.031 [2024-12-10 00:21:35.382679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.031 16784.00 IOPS, 131.12 MiB/s [2024-12-09T23:21:35.505Z] [2024-12-10 00:21:35.397111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.032 [2024-12-10 00:21:35.397131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.032 [2024-12-10 00:21:35.411938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.032 [2024-12-10 00:21:35.411959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.032 [2024-12-10 00:21:35.426785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:51.032 [2024-12-10 00:21:35.426806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.032 [2024-12-10 00:21:35.443202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.032 [2024-12-10 00:21:35.443223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.032 [2024-12-10 00:21:35.456536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.032 [2024-12-10 00:21:35.456561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.032 [2024-12-10 00:21:35.471470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.032 [2024-12-10 00:21:35.471490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.032 [2024-12-10 00:21:35.486887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.032 [2024-12-10 00:21:35.486907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.032 [2024-12-10 00:21:35.503300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.032 [2024-12-10 00:21:35.503321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.514880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.514900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.529078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.529098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.543812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.543837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.559036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.559056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.571117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.571137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.584938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.584958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.599456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.599476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.614974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.614996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.628201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.628221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.642861] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.642882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.659173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.659193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.672914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.672935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.688384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.688405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.703239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.703260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.713969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.713990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.728752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.728772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.743507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.743526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.291 [2024-12-10 00:21:35.758968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.291 [2024-12-10 00:21:35.758999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.773232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.773253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.788247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.788267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.802936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.802957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.816164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.816183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.831064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.831086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.844770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.844789] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.859467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.859486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.874848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.874868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.889165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.889185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.903700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.903720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.918944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.918965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.934985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.935005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.948665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.948685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.963654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.963674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.979209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.979229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:35.991689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:35.991708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:36.006876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:36.006897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.550 [2024-12-10 00:21:36.022974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.550 [2024-12-10 00:21:36.022994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.037119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.037140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.051895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.051915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.066779] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.066800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.080576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.080596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.095356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.095376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.111160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.111180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.124666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.124686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.139452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.139472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.154927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.154948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.171152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.171173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.182669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.182688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.196709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.196729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.211665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.211701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.227608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.227628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.243460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.243479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.259446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.259470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:51.810 [2024-12-10 00:21:36.275078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:51.810 [2024-12-10 00:21:36.275102] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.288750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.288771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.303617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.303637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.318780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.318801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.333195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.333214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.347921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.347941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.363031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.363052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.376951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.376971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 16805.00 IOPS, 131.29 MiB/s [2024-12-09T23:21:36.542Z] [2024-12-10 00:21:36.391316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.391336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.401757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.401777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.416214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.416234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.430636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.430655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.443847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.443867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.459448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.459468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.475076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.475096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 
00:21:36.488970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.488991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.503307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.503328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.514507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.514526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.069 [2024-12-10 00:21:36.528741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.069 [2024-12-10 00:21:36.528765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.328 [2024-12-10 00:21:36.543485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.328 [2024-12-10 00:21:36.543505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.328 [2024-12-10 00:21:36.559474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.328 [2024-12-10 00:21:36.559494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.328 [2024-12-10 00:21:36.574267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.328 [2024-12-10 00:21:36.574287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.328 [2024-12-10 00:21:36.589372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.328 [2024-12-10 00:21:36.589394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.328 [2024-12-10 00:21:36.604014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.328 [2024-12-10 00:21:36.604036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.328 [2024-12-10 00:21:36.619630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.329 [2024-12-10 00:21:36.619651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.329 [2024-12-10 00:21:36.635028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.329 [2024-12-10 00:21:36.635049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.329 [2024-12-10 00:21:36.649047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.329 [2024-12-10 00:21:36.649069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.329 [2024-12-10 00:21:36.663326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.329 [2024-12-10 00:21:36.663347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.329 [2024-12-10 00:21:36.674431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.329 [2024-12-10 00:21:36.674452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.329 [2024-12-10 00:21:36.689031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.329 [2024-12-10 00:21:36.689052] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.329 [2024-12-10 00:21:36.703392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.329 [2024-12-10 00:21:36.703413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.329 [2024-12-10 00:21:36.719140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.329 [2024-12-10 00:21:36.719161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.329 [2024-12-10 00:21:36.730167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.329 [2024-12-10 00:21:36.730188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.329 [2024-12-10 00:21:36.745310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.329 [2024-12-10 00:21:36.745330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.329 [2024-12-10 00:21:36.759812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.329 [2024-12-10 00:21:36.759838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.329 [2024-12-10 00:21:36.774466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.329 [2024-12-10 00:21:36.774485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.329 [2024-12-10 00:21:36.788480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.329 [2024-12-10 00:21:36.788501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:36.803387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:36.803412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:36.819196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:36.819217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:36.832640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:36.832661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:36.847216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:36.847236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:36.858222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:36.858242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:36.872792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:36.872812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:36.887237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:36.887257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:36.898461] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:36.898483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:36.913121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:36.913141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:36.927604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:36.927624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:36.942674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:36.942693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:36.956820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:36.956844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:36.971451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:36.971470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:36.987258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:36.987279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:37.000448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:37.000468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:37.016023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:37.016043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:37.031063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:37.031083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:37.045111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:37.045131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.588 [2024-12-10 00:21:37.059608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.588 [2024-12-10 00:21:37.059628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.075294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.075315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.088052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.088073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.102729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.102749] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.116792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.116813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.131191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.131211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.143962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.143981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.159180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.159201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.172729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.172750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.187546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.187565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.203466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.203485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.219487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.219506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.235174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.235193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.247420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.247439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.263099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.263119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.277104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.277124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.291669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.291689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.307210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.307231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:52.847 [2024-12-10 00:21:37.320068] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:52.847 [2024-12-10 00:21:37.320088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.334966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.334998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.347813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.347837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.363532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.363551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.378712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.378732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 16858.00 IOPS, 131.70 MiB/s [2024-12-09T23:21:37.579Z] [2024-12-10 00:21:37.395279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.395300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.407664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.407684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.422967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.422988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.436885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.436905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.451456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.451476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.466720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.466740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.480864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.480884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.495468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.495489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.510586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.510606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.524830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:53.106 [2024-12-10 00:21:37.524850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.539348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.539367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.554455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.554475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.106 [2024-12-10 00:21:37.569016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.106 [2024-12-10 00:21:37.569035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.583725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.583745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.599094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.599114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.612198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.612222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.627272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.627292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.637572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.637591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.652085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.652105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.667575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.667594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.683742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.683762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.698732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.698752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.712597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.712617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.727692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.727712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.742899] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.742919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.759034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.759055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.772653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.772674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.787083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.787103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.799441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.799460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.815340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.815360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.366 [2024-12-10 00:21:37.825493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.366 [2024-12-10 00:21:37.825512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.625 [2024-12-10 00:21:37.840055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.625 [2024-12-10 00:21:37.840076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.625 [2024-12-10 00:21:37.855772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.625 [2024-12-10 00:21:37.855792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.625 [2024-12-10 00:21:37.871031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:37.871051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:37.883742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:37.883766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:37.898281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:37.898300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:37.912994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:37.913013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:37.927608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:37.927627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:37.943662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:37.943682] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:37.958702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:37.958721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:37.973012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:37.973033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:37.987277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:37.987297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:37.998912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:37.998933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:38.012683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:38.012703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:38.027390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:38.027410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:38.043143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:38.043163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:38.056645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:38.056664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:38.071280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:38.071299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:38.082541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:38.082560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.626 [2024-12-10 00:21:38.097354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.626 [2024-12-10 00:21:38.097374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.111911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.111931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.126805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.126830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.142987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.143007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.156479] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.156503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.170872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.170892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.187096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.187117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.201481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.201503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.216116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.216137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.230918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.230939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.244979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.245000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.260302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.260322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.274731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.274751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.288380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.288401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.303511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.303531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.319104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.319126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.332873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.332893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:53.885 [2024-12-10 00:21:38.347642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:53.885 [2024-12-10 00:21:38.347662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.362462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.362485] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.376797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.376818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.391677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.391698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 16864.75 IOPS, 131.76 MiB/s [2024-12-09T23:21:38.618Z] [2024-12-10 00:21:38.406327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.406347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.420033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.420053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.435013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.435034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.446467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.446487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.461172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.461192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.475498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.475518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.491297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.491318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.503305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.503325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.519186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.519206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.532219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.532238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.547558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.547577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.562966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.562997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 
00:21:38.575916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.575936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.590743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.590763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.145 [2024-12-10 00:21:38.604911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.145 [2024-12-10 00:21:38.604930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.619881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.619902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.635146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.635168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.649100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.649120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.663914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.663935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.679315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.679336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.692466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.692486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.707181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.707201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.720523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.720544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.735139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.735159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.746267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.746287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.761167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.761188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.775522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.775542] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.791115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.791136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.805565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.805586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.819876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.819897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.834912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.834932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.851737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.851756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.404 [2024-12-10 00:21:38.866808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.404 [2024-12-10 00:21:38.866833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.663 [2024-12-10 00:21:38.881075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.663 [2024-12-10 00:21:38.881096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.663 [2024-12-10 00:21:38.895927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.663 [2024-12-10 00:21:38.895947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.663 [2024-12-10 00:21:38.911410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.663 [2024-12-10 00:21:38.911429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.663 [2024-12-10 00:21:38.927393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.663 [2024-12-10 00:21:38.927412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.663 [2024-12-10 00:21:38.942565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.663 [2024-12-10 00:21:38.942585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.663 [2024-12-10 00:21:38.956839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.663 [2024-12-10 00:21:38.956859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.663 [2024-12-10 00:21:38.971901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.663 [2024-12-10 00:21:38.971920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.663 [2024-12-10 00:21:38.986932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.663 [2024-12-10 00:21:38.986952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.663 [2024-12-10 00:21:39.001175] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.663 [2024-12-10 00:21:39.001195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.663 [2024-12-10 00:21:39.015569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.663 [2024-12-10 00:21:39.015589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.663 [2024-12-10 00:21:39.030680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.663 [2024-12-10 00:21:39.030699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.663 [2024-12-10 00:21:39.044951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.663 [2024-12-10 00:21:39.044972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.664 [2024-12-10 00:21:39.060070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.664 [2024-12-10 00:21:39.060090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.664 [2024-12-10 00:21:39.075264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.664 [2024-12-10 00:21:39.075284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.664 [2024-12-10 00:21:39.087706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.664 [2024-12-10 00:21:39.087726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.664 [2024-12-10 00:21:39.103546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.664 [2024-12-10 00:21:39.103566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.664 [2024-12-10 00:21:39.118989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.664 [2024-12-10 00:21:39.119010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.664 [2024-12-10 00:21:39.132063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.664 [2024-12-10 00:21:39.132084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.146924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.146946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.163079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.163100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.176884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.176904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.191461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.191481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.207504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.207525] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.223088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.223107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.235619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.235638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.251138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.251158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.262591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.262611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.277338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.277358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.292075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.292095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.306641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.306661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.320890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.320910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.335535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.335554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.350985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.351005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.363604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.363623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.378515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.378534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:54.922 [2024-12-10 00:21:39.392507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:54.922 [2024-12-10 00:21:39.392527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 16851.80 IOPS, 131.65 MiB/s 00:39:55.182 Latency(us) 00:39:55.182 [2024-12-09T23:21:39.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:55.182 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:39:55.182 Nvme1n1 : 5.00 16862.45 131.74 
0.00 0.00 7584.97 2097.15 12949.91 00:39:55.182 [2024-12-09T23:21:39.655Z] =================================================================================================================== 00:39:55.182 [2024-12-09T23:21:39.655Z] Total : 16862.45 131.74 0.00 0.00 7584.97 2097.15 12949.91 00:39:55.182 [2024-12-10 00:21:39.403088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.182 [2024-12-10 00:21:39.403107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 [2024-12-10 00:21:39.415084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.182 [2024-12-10 00:21:39.415101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 [2024-12-10 00:21:39.427095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.182 [2024-12-10 00:21:39.427114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 [2024-12-10 00:21:39.439091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.182 [2024-12-10 00:21:39.439111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 [2024-12-10 00:21:39.451086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.182 [2024-12-10 00:21:39.451100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 [2024-12-10 00:21:39.463087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.182 [2024-12-10 00:21:39.463111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 [2024-12-10 00:21:39.475084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.182 [2024-12-10 00:21:39.475099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 [2024-12-10 00:21:39.487082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.182 [2024-12-10 00:21:39.487096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 [2024-12-10 00:21:39.499086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.182 [2024-12-10 00:21:39.499101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 [2024-12-10 00:21:39.511081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.182 [2024-12-10 00:21:39.511095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 [2024-12-10 00:21:39.523079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.182 [2024-12-10 00:21:39.523091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 [2024-12-10 00:21:39.535083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.182 [2024-12-10 00:21:39.535097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 [2024-12-10 00:21:39.547080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:55.182 [2024-12-10 00:21:39.547091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 [2024-12-10 00:21:39.559078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:39:55.182 [2024-12-10 00:21:39.559090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:55.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (657466) - No such process 00:39:55.182 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 657466 00:39:55.182 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:55.182 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.182 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:55.182 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.182 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:55.182 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.182 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:55.182 delay0 00:39:55.182 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.182 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:55.182 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.182 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:55.182 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.182 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:55.442 [2024-12-10 00:21:39.719546] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:40:03.555 Initializing NVMe Controllers 00:40:03.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:03.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:03.555 Initialization complete. Launching workers. 
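For reference, the namespace swap and abort run traced above boil down to four commands: drop NSID 1 from cnode1, wrap malloc0 in a delay bdev, re-attach the delayed bdev as NSID 1, and point the abort example at the subsystem over TCP. The sketch below is not the test script itself; it assumes an SPDK checkout with scripts/rpc.py talking to the running nvmf_tgt over its default RPC socket, and it simply reuses the names and address printed in the log.

  # Sketch only: nqn, bdev names, and target address are taken from the log above.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Same abort invocation as the run above: 5 seconds of randrw at queue depth 64 on core 0.
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The delay bdev (all four latency parameters set to 1000000) presumably keeps commands outstanding long enough for the abort example to cancel them, which is consistent with the abort summary that follows.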
00:40:03.555 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 294, failed: 13623 00:40:03.555 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13824, failed to submit 93 00:40:03.555 success 13727, unsuccessful 97, failed 0 00:40:03.555 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:40:03.555 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:40:03.555 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:03.555 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:40:03.555 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:03.555 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:40:03.555 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:03.555 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:03.555 rmmod nvme_tcp 00:40:03.555 rmmod nvme_fabrics 00:40:03.555 rmmod nvme_keyring 00:40:03.555 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:03.555 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:40:03.555 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:40:03.555 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 655584 ']' 00:40:03.555 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 655584 00:40:03.556 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 655584 ']' 00:40:03.556 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 655584 00:40:03.556 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:40:03.556 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:03.556 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 655584 00:40:03.556 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:03.556 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:03.556 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 655584' 00:40:03.556 killing process with pid 655584 00:40:03.556 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 655584 00:40:03.556 00:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 655584 00:40:03.556 00:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:03.556 00:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:03.556 00:21:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:03.556 00:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:40:03.556 00:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:40:03.556 00:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:03.556 00:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:40:03.556 00:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:03.556 00:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:03.556 00:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:03.556 00:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:03.556 00:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:04.934 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:04.934 00:40:04.934 real 0m34.298s 00:40:04.934 user 0m41.452s 00:40:04.934 sys 0m15.526s 00:40:04.934 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:04.934 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:04.934 ************************************ 00:40:04.934 END TEST nvmf_zcopy 00:40:04.934 ************************************ 00:40:04.934 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:04.934 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:04.934 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:04.934 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:04.934 ************************************ 00:40:04.934 START TEST nvmf_nmic 00:40:04.934 ************************************ 00:40:04.934 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:04.934 * Looking for test storage... 
00:40:04.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:04.934 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:04.934 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:40:04.934 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:05.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.194 --rc genhtml_branch_coverage=1 00:40:05.194 --rc genhtml_function_coverage=1 00:40:05.194 --rc genhtml_legend=1 00:40:05.194 --rc geninfo_all_blocks=1 00:40:05.194 --rc geninfo_unexecuted_blocks=1 00:40:05.194 00:40:05.194 ' 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:05.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.194 --rc genhtml_branch_coverage=1 00:40:05.194 --rc genhtml_function_coverage=1 00:40:05.194 --rc genhtml_legend=1 00:40:05.194 --rc geninfo_all_blocks=1 00:40:05.194 --rc geninfo_unexecuted_blocks=1 00:40:05.194 00:40:05.194 ' 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:05.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.194 --rc genhtml_branch_coverage=1 00:40:05.194 --rc genhtml_function_coverage=1 00:40:05.194 --rc genhtml_legend=1 00:40:05.194 --rc geninfo_all_blocks=1 00:40:05.194 --rc geninfo_unexecuted_blocks=1 00:40:05.194 00:40:05.194 ' 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:05.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.194 --rc genhtml_branch_coverage=1 00:40:05.194 --rc genhtml_function_coverage=1 00:40:05.194 --rc genhtml_legend=1 00:40:05.194 --rc geninfo_all_blocks=1 00:40:05.194 --rc geninfo_unexecuted_blocks=1 00:40:05.194 00:40:05.194 ' 00:40:05.194 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:05.195 00:21:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:40:05.195 00:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:13.320 00:21:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:13.320 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:13.320 00:21:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:13.320 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:13.320 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:13.321 Found net devices under 0000:af:00.0: cvl_0_0 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:13.321 
00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:13.321 Found net devices under 0000:af:00.1: cvl_0_1 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:13.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:13.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:40:13.321 00:40:13.321 --- 10.0.0.2 ping statistics --- 00:40:13.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.321 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:13.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:13.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:40:13.321 00:40:13.321 --- 10.0.0.1 ping statistics --- 00:40:13.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.321 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=663170 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 663170 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 663170 ']' 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:13.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:13.321 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:13.321 [2024-12-10 00:21:56.823236] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:13.321 [2024-12-10 00:21:56.824315] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:40:13.321 [2024-12-10 00:21:56.824357] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:13.321 [2024-12-10 00:21:56.920225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:13.321 [2024-12-10 00:21:56.964095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:13.321 [2024-12-10 00:21:56.964135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:13.321 [2024-12-10 00:21:56.964145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:13.321 [2024-12-10 00:21:56.964154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:13.321 [2024-12-10 00:21:56.964160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:13.321 [2024-12-10 00:21:56.965925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:13.321 [2024-12-10 00:21:56.965963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:13.321 [2024-12-10 00:21:56.966066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:13.321 [2024-12-10 00:21:56.966067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:13.321 [2024-12-10 00:21:57.036166] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:13.321 [2024-12-10 00:21:57.036255] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:13.321 [2024-12-10 00:21:57.036799] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:40:13.321 [2024-12-10 00:21:57.036892] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:13.321 [2024-12-10 00:21:57.036969] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:13.321 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:13.321 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:40:13.321 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:13.321 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:13.321 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:13.321 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:13.322 [2024-12-10 00:21:57.710964] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:13.322 Malloc0 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.322 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:13.581 [2024-12-10 00:21:57.803184] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:13.581 test case1: single bdev can't be used in multiple subsystems 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:13.581 [2024-12-10 00:21:57.834680] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:40:13.581 [2024-12-10 00:21:57.834702] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:13.581 [2024-12-10 00:21:57.834712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:13.581 request: 00:40:13.581 { 00:40:13.581 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:13.581 "namespace": { 00:40:13.581 "bdev_name": "Malloc0", 00:40:13.581 "no_auto_visible": false, 00:40:13.581 "hide_metadata": false 00:40:13.581 }, 00:40:13.581 "method": "nvmf_subsystem_add_ns", 00:40:13.581 "req_id": 1 00:40:13.581 } 00:40:13.581 Got JSON-RPC error response 00:40:13.581 response: 00:40:13.581 { 00:40:13.581 "code": -32602, 00:40:13.581 "message": "Invalid parameters" 00:40:13.581 } 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:13.581 00:21:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:13.581 Adding namespace failed - expected result. 00:40:13.581 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:13.581 test case2: host connect to nvmf target in multiple paths 00:40:13.582 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:13.582 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.582 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:13.582 [2024-12-10 00:21:57.850778] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:13.582 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.582 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:13.841 00:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:13.841 00:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:14.099 00:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:40:14.099 00:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:14.099 00:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:14.099 00:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:40:16.005 00:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:16.005 00:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:16.005 00:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:16.005 00:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:16.005 00:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:16.005 00:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:40:16.005 00:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:16.005 [global] 00:40:16.005 thread=1 00:40:16.005 invalidate=1 
00:40:16.005 rw=write 00:40:16.005 time_based=1 00:40:16.005 runtime=1 00:40:16.005 ioengine=libaio 00:40:16.005 direct=1 00:40:16.005 bs=4096 00:40:16.005 iodepth=1 00:40:16.005 norandommap=0 00:40:16.005 numjobs=1 00:40:16.005 00:40:16.005 verify_dump=1 00:40:16.005 verify_backlog=512 00:40:16.005 verify_state_save=0 00:40:16.005 do_verify=1 00:40:16.005 verify=crc32c-intel 00:40:16.005 [job0] 00:40:16.005 filename=/dev/nvme0n1 00:40:16.005 Could not set queue depth (nvme0n1) 00:40:16.263 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.263 fio-3.35 00:40:16.263 Starting 1 thread 00:40:17.641 00:40:17.641 job0: (groupid=0, jobs=1): err= 0: pid=663950: Tue Dec 10 00:22:01 2024 00:40:17.641 read: IOPS=1508, BW=6033KiB/s (6178kB/s)(6220KiB/1031msec) 00:40:17.641 slat (nsec): min=7022, max=33306, avg=9015.97, stdev=1575.01 00:40:17.641 clat (usec): min=199, max=41531, avg=425.44, stdev=2754.13 00:40:17.641 lat (usec): min=207, max=41541, avg=434.46, stdev=2754.19 00:40:17.641 clat percentiles (usec): 00:40:17.641 | 1.00th=[ 204], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 212], 00:40:17.641 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 262], 00:40:17.641 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 273], 95.00th=[ 277], 00:40:17.641 | 99.00th=[ 388], 99.50th=[ 482], 99.90th=[41681], 99.95th=[41681], 00:40:17.641 | 99.99th=[41681] 00:40:17.641 write: IOPS=1986, BW=7946KiB/s (8136kB/s)(8192KiB/1031msec); 0 zone resets 00:40:17.641 slat (nsec): min=11260, max=50679, avg=12373.15, stdev=1480.60 00:40:17.641 clat (usec): min=129, max=415, avg=156.92, stdev=37.62 00:40:17.641 lat (usec): min=141, max=466, avg=169.29, stdev=37.82 00:40:17.641 clat percentiles (usec): 00:40:17.641 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:40:17.641 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 141], 60.00th=[ 143], 00:40:17.641 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 241], 95.00th=[ 241], 00:40:17.641 | 99.00th=[ 249], 99.50th=[ 255], 99.90th=[ 334], 99.95th=[ 367], 00:40:17.641 | 99.99th=[ 416] 00:40:17.641 bw ( KiB/s): min= 5880, max=10504, per=100.00%, avg=8192.00, stdev=3269.66, samples=2 00:40:17.641 iops : min= 1470, max= 2626, avg=2048.00, stdev=817.42, samples=2 00:40:17.641 lat (usec) : 250=81.68%, 500=18.12% 00:40:17.641 lat (msec) : 50=0.19% 00:40:17.641 cpu : usr=2.14%, sys=4.08%, ctx=3603, majf=0, minf=1 00:40:17.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.641 issued rwts: total=1555,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.641 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:17.641 00:40:17.641 Run status group 0 (all jobs): 00:40:17.641 READ: bw=6033KiB/s (6178kB/s), 6033KiB/s-6033KiB/s (6178kB/s-6178kB/s), io=6220KiB (6369kB), run=1031-1031msec 00:40:17.641 WRITE: bw=7946KiB/s (8136kB/s), 7946KiB/s-7946KiB/s (8136kB/s-8136kB/s), io=8192KiB (8389kB), run=1031-1031msec 00:40:17.641 00:40:17.641 Disk stats (read/write): 00:40:17.641 nvme0n1: ios=1595/2048, merge=0/0, ticks=509/309, in_queue=818, util=91.68% 00:40:17.641 00:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:17.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:17.641 00:22:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:17.641 00:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:17.641 00:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:17.642 00:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:17.642 00:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:17.642 00:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:17.642 00:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:17.642 00:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:17.642 00:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:17.642 00:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:17.642 00:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:17.642 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:17.642 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:17.642 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:17.642 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:17.642 rmmod nvme_tcp 00:40:17.642 rmmod nvme_fabrics 00:40:17.642 rmmod nvme_keyring 00:40:17.642 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:17.642 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:17.642 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:17.642 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 663170 ']' 00:40:17.642 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 663170 00:40:17.642 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 663170 ']' 00:40:17.642 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 663170 00:40:17.642 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:17.642 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:17.642 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 663170 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 663170' 00:40:17.900 killing process with pid 663170 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 663170 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 663170 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:17.900 00:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:20.477 00:40:20.477 real 0m15.151s 00:40:20.477 user 0m26.966s 00:40:20.477 sys 0m7.975s 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:20.477 ************************************ 00:40:20.477 END TEST nvmf_nmic 00:40:20.477 ************************************ 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:20.477 ************************************ 00:40:20.477 START TEST nvmf_fio_target 00:40:20.477 ************************************ 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:20.477 * Looking for test storage... 
00:40:20.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:20.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.477 --rc genhtml_branch_coverage=1 00:40:20.477 --rc genhtml_function_coverage=1 00:40:20.477 --rc genhtml_legend=1 00:40:20.477 --rc geninfo_all_blocks=1 00:40:20.477 --rc geninfo_unexecuted_blocks=1 00:40:20.477 00:40:20.477 ' 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:20.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.477 --rc genhtml_branch_coverage=1 00:40:20.477 --rc genhtml_function_coverage=1 00:40:20.477 --rc genhtml_legend=1 00:40:20.477 --rc geninfo_all_blocks=1 00:40:20.477 --rc geninfo_unexecuted_blocks=1 00:40:20.477 00:40:20.477 ' 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:20.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.477 --rc genhtml_branch_coverage=1 00:40:20.477 --rc genhtml_function_coverage=1 00:40:20.477 --rc genhtml_legend=1 00:40:20.477 --rc geninfo_all_blocks=1 00:40:20.477 --rc geninfo_unexecuted_blocks=1 00:40:20.477 00:40:20.477 ' 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:20.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.477 --rc genhtml_branch_coverage=1 00:40:20.477 --rc genhtml_function_coverage=1 00:40:20.477 --rc genhtml_legend=1 00:40:20.477 --rc geninfo_all_blocks=1 00:40:20.477 --rc geninfo_unexecuted_blocks=1 00:40:20.477 
00:40:20.477 ' 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:40:20.477 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:20.478 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:28.684 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:28.684 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:28.684 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:28.684 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:28.684 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:28.684 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:28.685 00:22:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:28.685 00:22:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:28.685 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:28.685 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:28.685 Found net 
devices under 0000:af:00.0: cvl_0_0 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:28.685 Found net devices under 0000:af:00.1: cvl_0_1 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:28.685 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:28.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:28.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:40:28.686 00:40:28.686 --- 10.0.0.2 ping statistics --- 00:40:28.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.686 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:28.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:28.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:40:28.686 00:40:28.686 --- 10.0.0.1 ping statistics --- 00:40:28.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.686 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:28.686 00:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=667904 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 667904 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 667904 ']' 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:28.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
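[editor's note] The entries above show nvmftestinit building the two-port NVMe/TCP test topology: the first e810 port (cvl_0_0) is moved into a dedicated network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2 to act as the target side, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; an iptables ACCEPT rule opens TCP port 4420 on the initiator interface, and the two pings verify reachability in both directions. The following is a minimal stand-alone sketch of that setup, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses taken from this log (on another host the interface names would differ, and the SPDK helper also tags the iptables rule with a comment, omitted here):

    #!/usr/bin/env bash
    # Sketch only; requires root and the two NIC ports named as in this log.
    set -e
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"                                        # namespace holding the target-side port
    ip link set cvl_0_0 netns "$NS"                           # move the target port out of the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address (root namespace)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic to the initiator side
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator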
00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:28.686 [2024-12-10 00:22:12.055388] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:28.686 [2024-12-10 00:22:12.056437] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:40:28.686 [2024-12-10 00:22:12.056482] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:28.686 [2024-12-10 00:22:12.152053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:28.686 [2024-12-10 00:22:12.191997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:28.686 [2024-12-10 00:22:12.192037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:28.686 [2024-12-10 00:22:12.192046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:28.686 [2024-12-10 00:22:12.192055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:28.686 [2024-12-10 00:22:12.192062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:28.686 [2024-12-10 00:22:12.193681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:28.686 [2024-12-10 00:22:12.193794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:28.686 [2024-12-10 00:22:12.193906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:28.686 [2024-12-10 00:22:12.193905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.686 [2024-12-10 00:22:12.262085] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:28.686 [2024-12-10 00:22:12.262336] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:28.686 [2024-12-10 00:22:12.262872] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:28.686 [2024-12-10 00:22:12.262993] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:28.686 [2024-12-10 00:22:12.263065] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
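[editor's note] At this point nvmf_tgt has been launched inside the cvl_0_0_ns_spdk namespace with --interrupt-mode and core mask 0xF: the notices above show the four reactors starting on cores 0-3 and each spdk_thread (app_thread plus the four nvmf_tgt poll groups) being switched to interrupt mode, which is the point of the interrupt-mode variant of this test. The entries that follow then configure the target over the RPC socket; a condensed sketch of that rpc.py sequence is below, with rpc.py standing for scripts/rpc.py (the full workspace path appears in the log) and the NQN, serial, address, and bdev names copied from this log. The nvme connect line omits the --hostnqn/--hostid arguments the test passes:

    # Condensed sketch of the target configuration driven by target/fio.sh in the entries below.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512        # creates Malloc0; repeated for Malloc1..Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # exposes nvme0n1..nvme0n4 used by the fio jobs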
00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:28.686 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:28.686 [2024-12-10 00:22:13.106807] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:28.686 00:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:28.945 00:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:28.945 00:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:29.205 00:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:29.205 00:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:29.464 00:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:29.464 00:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:29.723 00:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:29.723 00:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:29.982 00:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:29.982 00:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:40:29.982 00:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:30.241 00:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:40:30.241 00:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:30.500 00:22:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:40:30.500 00:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:40:30.758 00:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:30.758 00:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:30.758 00:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:31.016 00:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:31.016 00:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:31.275 00:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:31.275 [2024-12-10 00:22:15.682712] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:31.275 00:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:40:31.534 00:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:40:31.793 00:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:32.052 00:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:40:32.052 00:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:40:32.052 00:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:32.052 00:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:40:32.052 00:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:40:32.052 00:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:40:33.959 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:33.959 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:40:33.959 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:34.218 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:40:34.218 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:34.218 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:40:34.218 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:34.218 [global] 00:40:34.218 thread=1 00:40:34.218 invalidate=1 00:40:34.218 rw=write 00:40:34.218 time_based=1 00:40:34.218 runtime=1 00:40:34.218 ioengine=libaio 00:40:34.218 direct=1 00:40:34.218 bs=4096 00:40:34.218 iodepth=1 00:40:34.218 norandommap=0 00:40:34.218 numjobs=1 00:40:34.218 00:40:34.218 verify_dump=1 00:40:34.218 verify_backlog=512 00:40:34.218 verify_state_save=0 00:40:34.218 do_verify=1 00:40:34.218 verify=crc32c-intel 00:40:34.218 [job0] 00:40:34.218 filename=/dev/nvme0n1 00:40:34.218 [job1] 00:40:34.218 filename=/dev/nvme0n2 00:40:34.218 [job2] 00:40:34.218 filename=/dev/nvme0n3 00:40:34.218 [job3] 00:40:34.218 filename=/dev/nvme0n4 00:40:34.218 Could not set queue depth (nvme0n1) 00:40:34.218 Could not set queue depth (nvme0n2) 00:40:34.218 Could not set queue depth (nvme0n3) 00:40:34.218 Could not set queue depth (nvme0n4) 00:40:34.477 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:34.477 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:34.477 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:34.477 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:34.477 fio-3.35 00:40:34.477 Starting 4 threads 00:40:35.857 00:40:35.857 job0: (groupid=0, jobs=1): err= 0: pid=669168: Tue Dec 10 00:22:20 2024 00:40:35.857 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:40:35.857 slat (nsec): min=11387, max=25694, avg=17598.77, stdev=4678.51 00:40:35.857 clat (usec): min=40857, max=41801, avg=41039.60, stdev=199.93 00:40:35.857 lat (usec): min=40878, max=41827, avg=41057.20, stdev=201.62 00:40:35.857 clat percentiles (usec): 00:40:35.857 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:35.857 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:35.857 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:35.857 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:40:35.857 | 99.99th=[41681] 00:40:35.857 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:40:35.857 slat (nsec): min=13112, max=71743, avg=15312.58, stdev=3289.42 00:40:35.857 clat (usec): min=152, max=425, avg=205.51, stdev=25.39 00:40:35.857 lat (usec): min=166, max=497, avg=220.83, stdev=26.41 00:40:35.857 clat percentiles (usec): 00:40:35.857 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 186], 00:40:35.857 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 210], 00:40:35.857 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 247], 00:40:35.857 
| 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 424], 99.95th=[ 424], 00:40:35.857 | 99.99th=[ 424] 00:40:35.858 bw ( KiB/s): min= 4096, max= 4096, per=25.55%, avg=4096.00, stdev= 0.00, samples=1 00:40:35.858 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:35.858 lat (usec) : 250=91.76%, 500=4.12% 00:40:35.858 lat (msec) : 50=4.12% 00:40:35.858 cpu : usr=0.69%, sys=0.49%, ctx=538, majf=0, minf=2 00:40:35.858 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:35.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.858 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:35.858 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:35.858 job1: (groupid=0, jobs=1): err= 0: pid=669169: Tue Dec 10 00:22:20 2024 00:40:35.858 read: IOPS=2062, BW=8249KiB/s (8446kB/s)(8364KiB/1014msec) 00:40:35.858 slat (nsec): min=8368, max=44636, avg=9228.25, stdev=2039.21 00:40:35.858 clat (usec): min=171, max=41331, avg=274.59, stdev=1786.60 00:40:35.858 lat (usec): min=188, max=41344, avg=283.81, stdev=1786.89 00:40:35.858 clat percentiles (usec): 00:40:35.858 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 190], 00:40:35.858 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 194], 00:40:35.858 | 70.00th=[ 196], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 225], 00:40:35.858 | 99.00th=[ 251], 99.50th=[ 281], 99.90th=[41157], 99.95th=[41157], 00:40:35.858 | 99.99th=[41157] 00:40:35.858 write: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec); 0 zone resets 00:40:35.858 slat (nsec): min=11668, max=46801, avg=12624.39, stdev=1751.14 00:40:35.858 clat (usec): min=115, max=298, avg=146.66, stdev=19.58 00:40:35.858 lat (usec): min=139, max=345, avg=159.29, stdev=19.89 00:40:35.858 clat percentiles (usec): 00:40:35.858 | 1.00th=[ 133], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 135], 00:40:35.858 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 139], 60.00th=[ 141], 00:40:35.858 | 70.00th=[ 143], 80.00th=[ 161], 90.00th=[ 176], 95.00th=[ 184], 00:40:35.858 | 99.00th=[ 237], 99.50th=[ 241], 99.90th=[ 251], 99.95th=[ 277], 00:40:35.858 | 99.99th=[ 297] 00:40:35.858 bw ( KiB/s): min= 8192, max=12288, per=63.88%, avg=10240.00, stdev=2896.31, samples=2 00:40:35.858 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:40:35.858 lat (usec) : 250=99.42%, 500=0.47%, 1000=0.02% 00:40:35.858 lat (msec) : 50=0.09% 00:40:35.858 cpu : usr=4.54%, sys=7.40%, ctx=4651, majf=0, minf=2 00:40:35.858 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:35.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.858 issued rwts: total=2091,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:35.858 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:35.858 job2: (groupid=0, jobs=1): err= 0: pid=669170: Tue Dec 10 00:22:20 2024 00:40:35.858 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:40:35.858 slat (nsec): min=11372, max=27775, avg=24278.36, stdev=3139.74 00:40:35.858 clat (usec): min=40781, max=41192, avg=40960.62, stdev=84.07 00:40:35.858 lat (usec): min=40806, max=41216, avg=40984.90, stdev=83.73 00:40:35.858 clat percentiles (usec): 00:40:35.858 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:40:35.858 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:40:35.858 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:35.858 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:35.858 | 99.99th=[41157] 00:40:35.858 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:40:35.858 slat (nsec): min=12129, max=45406, avg=13178.21, stdev=2085.47 00:40:35.858 clat (usec): min=164, max=398, avg=218.43, stdev=32.91 00:40:35.858 lat (usec): min=177, max=411, avg=231.61, stdev=33.08 00:40:35.858 clat percentiles (usec): 00:40:35.858 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 194], 00:40:35.858 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 219], 00:40:35.858 | 70.00th=[ 225], 80.00th=[ 239], 90.00th=[ 262], 95.00th=[ 277], 00:40:35.858 | 99.00th=[ 351], 99.50th=[ 371], 99.90th=[ 400], 99.95th=[ 400], 00:40:35.858 | 99.99th=[ 400] 00:40:35.858 bw ( KiB/s): min= 4096, max= 4096, per=25.55%, avg=4096.00, stdev= 0.00, samples=1 00:40:35.858 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:35.858 lat (usec) : 250=82.58%, 500=13.30% 00:40:35.858 lat (msec) : 50=4.12% 00:40:35.858 cpu : usr=0.98%, sys=0.39%, ctx=534, majf=0, minf=2 00:40:35.858 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:35.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.858 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:35.858 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:35.858 job3: (groupid=0, jobs=1): err= 0: pid=669171: Tue Dec 10 00:22:20 2024 00:40:35.858 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:40:35.858 slat (nsec): min=11426, max=27342, avg=23838.36, stdev=3987.17 00:40:35.858 clat (usec): min=40793, max=43971, avg=41094.73, stdev=645.64 00:40:35.858 lat (usec): min=40804, max=43987, avg=41118.57, stdev=644.05 00:40:35.858 clat percentiles (usec): 00:40:35.858 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:35.858 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:35.858 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:35.858 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:40:35.858 | 99.99th=[43779] 00:40:35.858 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:40:35.858 slat (nsec): min=12074, max=47195, avg=13171.29, stdev=2048.95 00:40:35.858 clat (usec): min=154, max=271, avg=175.74, stdev=12.27 00:40:35.858 lat (usec): min=167, max=318, avg=188.91, stdev=12.93 00:40:35.858 clat percentiles (usec): 00:40:35.858 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:40:35.858 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:40:35.858 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 198], 00:40:35.858 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 273], 99.95th=[ 273], 00:40:35.858 | 99.99th=[ 273] 00:40:35.858 bw ( KiB/s): min= 4096, max= 4096, per=25.55%, avg=4096.00, stdev= 0.00, samples=1 00:40:35.858 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:35.858 lat (usec) : 250=95.51%, 500=0.37% 00:40:35.858 lat (msec) : 50=4.12% 00:40:35.858 cpu : usr=0.50%, sys=1.00%, ctx=534, majf=0, minf=1 00:40:35.858 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:35.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.858 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:35.858 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:35.858 00:40:35.858 Run status group 0 (all jobs): 00:40:35.858 READ: bw=8442KiB/s (8645kB/s), 86.1KiB/s-8249KiB/s (88.2kB/s-8446kB/s), io=8628KiB (8835kB), run=1003-1022msec 00:40:35.858 WRITE: bw=15.7MiB/s (16.4MB/s), 2004KiB/s-9.86MiB/s (2052kB/s-10.3MB/s), io=16.0MiB (16.8MB), run=1003-1022msec 00:40:35.858 00:40:35.858 Disk stats (read/write): 00:40:35.858 nvme0n1: ios=43/512, merge=0/0, ticks=1682/101, in_queue=1783, util=99.20% 00:40:35.858 nvme0n2: ios=2063/2441, merge=0/0, ticks=465/326, in_queue=791, util=87.99% 00:40:35.858 nvme0n3: ios=17/512, merge=0/0, ticks=697/109, in_queue=806, util=88.13% 00:40:35.858 nvme0n4: ios=17/512, merge=0/0, ticks=700/80, in_queue=780, util=89.48% 00:40:35.858 00:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:35.858 [global] 00:40:35.858 thread=1 00:40:35.858 invalidate=1 00:40:35.858 rw=randwrite 00:40:35.858 time_based=1 00:40:35.858 runtime=1 00:40:35.858 ioengine=libaio 00:40:35.858 direct=1 00:40:35.858 bs=4096 00:40:35.858 iodepth=1 00:40:35.858 norandommap=0 00:40:35.858 numjobs=1 00:40:35.858 00:40:35.858 verify_dump=1 00:40:35.858 verify_backlog=512 00:40:35.858 verify_state_save=0 00:40:35.858 do_verify=1 00:40:35.858 verify=crc32c-intel 00:40:35.858 [job0] 00:40:35.858 filename=/dev/nvme0n1 00:40:35.858 [job1] 00:40:35.858 filename=/dev/nvme0n2 00:40:35.858 [job2] 00:40:35.858 filename=/dev/nvme0n3 00:40:35.858 [job3] 00:40:35.858 filename=/dev/nvme0n4 00:40:35.858 Could not set queue depth (nvme0n1) 00:40:35.858 Could not set queue depth (nvme0n2) 00:40:35.858 Could not set queue depth (nvme0n3) 00:40:35.858 Could not set queue depth (nvme0n4) 00:40:36.117 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:36.117 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:36.117 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:36.117 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:36.117 fio-3.35 00:40:36.117 Starting 4 threads 00:40:37.497 00:40:37.497 job0: (groupid=0, jobs=1): err= 0: pid=669592: Tue Dec 10 00:22:21 2024 00:40:37.497 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:37.497 slat (nsec): min=8407, max=36649, avg=9216.04, stdev=1394.42 00:40:37.497 clat (usec): min=218, max=443, avg=266.26, stdev=28.01 00:40:37.497 lat (usec): min=227, max=452, avg=275.48, stdev=28.06 00:40:37.497 clat percentiles (usec): 00:40:37.497 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 247], 00:40:37.497 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 260], 00:40:37.497 | 70.00th=[ 265], 80.00th=[ 293], 90.00th=[ 318], 95.00th=[ 322], 00:40:37.497 | 99.00th=[ 334], 99.50th=[ 363], 99.90th=[ 441], 99.95th=[ 441], 00:40:37.497 | 99.99th=[ 445] 00:40:37.497 write: IOPS=2401, BW=9606KiB/s (9837kB/s)(9616KiB/1001msec); 0 zone resets 00:40:37.497 slat (nsec): min=11236, max=47872, avg=12588.29, stdev=1919.31 00:40:37.497 clat (usec): 
min=126, max=309, avg=164.11, stdev=11.20 00:40:37.497 lat (usec): min=150, max=321, avg=176.70, stdev=11.43 00:40:37.497 clat percentiles (usec): 00:40:37.497 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 157], 00:40:37.497 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:40:37.497 | 70.00th=[ 167], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 182], 00:40:37.497 | 99.00th=[ 202], 99.50th=[ 210], 99.90th=[ 262], 99.95th=[ 297], 00:40:37.497 | 99.99th=[ 310] 00:40:37.497 bw ( KiB/s): min= 9008, max= 9008, per=38.81%, avg=9008.00, stdev= 0.00, samples=1 00:40:37.497 iops : min= 2252, max= 2252, avg=2252.00, stdev= 0.00, samples=1 00:40:37.497 lat (usec) : 250=67.30%, 500=32.70% 00:40:37.497 cpu : usr=3.70%, sys=4.50%, ctx=4453, majf=0, minf=1 00:40:37.497 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:37.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.497 issued rwts: total=2048,2404,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.497 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:37.497 job1: (groupid=0, jobs=1): err= 0: pid=669593: Tue Dec 10 00:22:21 2024 00:40:37.497 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:40:37.497 slat (nsec): min=11681, max=28272, avg=22983.18, stdev=4170.40 00:40:37.497 clat (usec): min=40793, max=41902, avg=41022.25, stdev=216.92 00:40:37.497 lat (usec): min=40816, max=41925, avg=41045.23, stdev=216.58 00:40:37.497 clat percentiles (usec): 00:40:37.497 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:37.497 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:37.497 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:37.497 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:40:37.497 | 99.99th=[41681] 00:40:37.497 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:40:37.497 slat (nsec): min=11114, max=50334, avg=12423.09, stdev=2463.34 00:40:37.497 clat (usec): min=150, max=381, avg=179.83, stdev=19.25 00:40:37.497 lat (usec): min=162, max=431, avg=192.26, stdev=20.19 00:40:37.497 clat percentiles (usec): 00:40:37.497 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 167], 00:40:37.497 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:40:37.497 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 206], 00:40:37.497 | 99.00th=[ 249], 99.50th=[ 269], 99.90th=[ 383], 99.95th=[ 383], 00:40:37.497 | 99.99th=[ 383] 00:40:37.497 bw ( KiB/s): min= 4096, max= 4096, per=17.65%, avg=4096.00, stdev= 0.00, samples=1 00:40:37.497 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:37.497 lat (usec) : 250=94.94%, 500=0.94% 00:40:37.497 lat (msec) : 50=4.12% 00:40:37.497 cpu : usr=0.20%, sys=0.80%, ctx=534, majf=0, minf=2 00:40:37.497 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:37.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.497 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.497 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:37.497 job2: (groupid=0, jobs=1): err= 0: pid=669594: Tue Dec 10 00:22:21 2024 00:40:37.497 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:37.497 slat (nsec): 
min=9098, max=45519, avg=9995.28, stdev=1293.60 00:40:37.497 clat (usec): min=204, max=676, avg=242.44, stdev=21.49 00:40:37.497 lat (usec): min=216, max=686, avg=252.43, stdev=21.45 00:40:37.497 clat percentiles (usec): 00:40:37.497 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 233], 00:40:37.497 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:40:37.497 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 255], 00:40:37.497 | 99.00th=[ 273], 99.50th=[ 310], 99.90th=[ 523], 99.95th=[ 619], 00:40:37.497 | 99.99th=[ 676] 00:40:37.497 write: IOPS=2401, BW=9606KiB/s (9837kB/s)(9616KiB/1001msec); 0 zone resets 00:40:37.497 slat (nsec): min=10239, max=57096, avg=13911.89, stdev=1810.31 00:40:37.497 clat (usec): min=143, max=397, avg=181.27, stdev=21.95 00:40:37.497 lat (usec): min=156, max=412, avg=195.18, stdev=22.08 00:40:37.497 clat percentiles (usec): 00:40:37.497 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:40:37.497 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:40:37.497 | 70.00th=[ 190], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 219], 00:40:37.497 | 99.00th=[ 235], 99.50th=[ 273], 99.90th=[ 314], 99.95th=[ 330], 00:40:37.497 | 99.99th=[ 400] 00:40:37.497 bw ( KiB/s): min= 9592, max= 9592, per=41.32%, avg=9592.00, stdev= 0.00, samples=1 00:40:37.497 iops : min= 2398, max= 2398, avg=2398.00, stdev= 0.00, samples=1 00:40:37.497 lat (usec) : 250=93.24%, 500=6.67%, 750=0.09% 00:40:37.497 cpu : usr=4.50%, sys=8.20%, ctx=4455, majf=0, minf=1 00:40:37.497 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:37.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.497 issued rwts: total=2048,2404,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.497 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:37.497 job3: (groupid=0, jobs=1): err= 0: pid=669597: Tue Dec 10 00:22:21 2024 00:40:37.497 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:40:37.497 slat (nsec): min=11060, max=12486, avg=11678.41, stdev=398.23 00:40:37.497 clat (usec): min=40867, max=41900, avg=41081.81, stdev=244.85 00:40:37.497 lat (usec): min=40879, max=41912, avg=41093.49, stdev=244.83 00:40:37.497 clat percentiles (usec): 00:40:37.497 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:37.497 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:37.497 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:40:37.497 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:40:37.497 | 99.99th=[41681] 00:40:37.497 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:40:37.497 slat (nsec): min=11664, max=55719, avg=12732.64, stdev=2267.97 00:40:37.497 clat (usec): min=156, max=277, avg=181.37, stdev=15.31 00:40:37.497 lat (usec): min=169, max=333, avg=194.10, stdev=16.04 00:40:37.497 clat percentiles (usec): 00:40:37.497 | 1.00th=[ 159], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:40:37.497 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:40:37.497 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 208], 00:40:37.497 | 99.00th=[ 227], 99.50th=[ 251], 99.90th=[ 277], 99.95th=[ 277], 00:40:37.497 | 99.99th=[ 277] 00:40:37.497 bw ( KiB/s): min= 4096, max= 4096, per=17.65%, avg=4096.00, stdev= 0.00, samples=1 00:40:37.497 iops : min= 1024, max= 1024, 
avg=1024.00, stdev= 0.00, samples=1 00:40:37.497 lat (usec) : 250=95.32%, 500=0.56% 00:40:37.497 lat (msec) : 50=4.12% 00:40:37.497 cpu : usr=0.20%, sys=0.80%, ctx=535, majf=0, minf=1 00:40:37.497 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:37.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.497 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.497 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:37.497 00:40:37.497 Run status group 0 (all jobs): 00:40:37.497 READ: bw=16.1MiB/s (16.9MB/s), 87.6KiB/s-8184KiB/s (89.7kB/s-8380kB/s), io=16.2MiB (17.0MB), run=1001-1005msec 00:40:37.497 WRITE: bw=22.7MiB/s (23.8MB/s), 2038KiB/s-9606KiB/s (2087kB/s-9837kB/s), io=22.8MiB (23.9MB), run=1001-1005msec 00:40:37.497 00:40:37.497 Disk stats (read/write): 00:40:37.497 nvme0n1: ios=1659/2048, merge=0/0, ticks=1414/318, in_queue=1732, util=99.60% 00:40:37.497 nvme0n2: ios=17/512, merge=0/0, ticks=698/91, in_queue=789, util=84.73% 00:40:37.497 nvme0n3: ios=1644/2048, merge=0/0, ticks=1171/355, in_queue=1526, util=97.01% 00:40:37.497 nvme0n4: ios=75/512, merge=0/0, ticks=1592/92, in_queue=1684, util=99.68% 00:40:37.497 00:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:40:37.497 [global] 00:40:37.497 thread=1 00:40:37.497 invalidate=1 00:40:37.497 rw=write 00:40:37.497 time_based=1 00:40:37.497 runtime=1 00:40:37.497 ioengine=libaio 00:40:37.497 direct=1 00:40:37.497 bs=4096 00:40:37.497 iodepth=128 00:40:37.497 norandommap=0 00:40:37.497 numjobs=1 00:40:37.497 00:40:37.497 verify_dump=1 00:40:37.497 verify_backlog=512 00:40:37.497 verify_state_save=0 00:40:37.497 do_verify=1 00:40:37.497 verify=crc32c-intel 00:40:37.497 [job0] 00:40:37.497 filename=/dev/nvme0n1 00:40:37.497 [job1] 00:40:37.497 filename=/dev/nvme0n2 00:40:37.497 [job2] 00:40:37.497 filename=/dev/nvme0n3 00:40:37.497 [job3] 00:40:37.497 filename=/dev/nvme0n4 00:40:37.498 Could not set queue depth (nvme0n1) 00:40:37.498 Could not set queue depth (nvme0n2) 00:40:37.498 Could not set queue depth (nvme0n3) 00:40:37.498 Could not set queue depth (nvme0n4) 00:40:37.757 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:37.757 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:37.757 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:37.757 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:37.757 fio-3.35 00:40:37.757 Starting 4 threads 00:40:39.136 00:40:39.136 job0: (groupid=0, jobs=1): err= 0: pid=670010: Tue Dec 10 00:22:23 2024 00:40:39.136 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:40:39.136 slat (usec): min=2, max=30412, avg=152.67, stdev=1135.12 00:40:39.136 clat (usec): min=3990, max=88191, avg=16092.31, stdev=13068.81 00:40:39.136 lat (usec): min=4004, max=88194, avg=16244.99, stdev=13192.40 00:40:39.136 clat percentiles (usec): 00:40:39.136 | 1.00th=[ 4359], 5.00th=[ 9110], 10.00th=[ 9110], 20.00th=[ 9372], 00:40:39.136 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10683], 00:40:39.136 | 70.00th=[15926], 80.00th=[22414], 
90.00th=[33817], 95.00th=[45876], 00:40:39.136 | 99.00th=[70779], 99.50th=[85459], 99.90th=[88605], 99.95th=[88605], 00:40:39.136 | 99.99th=[88605] 00:40:39.136 write: IOPS=3111, BW=12.2MiB/s (12.7MB/s)(12.2MiB/1007msec); 0 zone resets 00:40:39.136 slat (usec): min=3, max=19568, avg=161.69, stdev=880.55 00:40:39.136 clat (usec): min=1888, max=88187, avg=24966.50, stdev=15894.35 00:40:39.136 lat (usec): min=1906, max=88190, avg=25128.19, stdev=15969.11 00:40:39.136 clat percentiles (usec): 00:40:39.136 | 1.00th=[ 4146], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 9896], 00:40:39.136 | 30.00th=[17695], 40.00th=[20055], 50.00th=[23462], 60.00th=[23725], 00:40:39.136 | 70.00th=[25822], 80.00th=[36963], 90.00th=[50070], 95.00th=[53216], 00:40:39.136 | 99.00th=[80217], 99.50th=[81265], 99.90th=[81265], 99.95th=[88605], 00:40:39.136 | 99.99th=[88605] 00:40:39.136 bw ( KiB/s): min=10768, max=13808, per=19.54%, avg=12288.00, stdev=2149.60, samples=2 00:40:39.136 iops : min= 2692, max= 3452, avg=3072.00, stdev=537.40, samples=2 00:40:39.136 lat (msec) : 2=0.03%, 4=0.47%, 10=34.44%, 20=23.98%, 50=34.49% 00:40:39.136 lat (msec) : 100=6.59% 00:40:39.136 cpu : usr=3.58%, sys=3.88%, ctx=326, majf=0, minf=1 00:40:39.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:40:39.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:39.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:39.136 issued rwts: total=3072,3133,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:39.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:39.136 job1: (groupid=0, jobs=1): err= 0: pid=670014: Tue Dec 10 00:22:23 2024 00:40:39.136 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:40:39.136 slat (usec): min=2, max=17400, avg=138.52, stdev=990.06 00:40:39.136 clat (usec): min=3837, max=49590, avg=16680.54, stdev=8012.50 00:40:39.136 lat (usec): min=3848, max=49617, avg=16819.05, stdev=8080.71 00:40:39.136 clat percentiles (usec): 00:40:39.136 | 1.00th=[ 4359], 5.00th=[ 9241], 10.00th=[ 9241], 20.00th=[ 9503], 00:40:39.136 | 30.00th=[10028], 40.00th=[11731], 50.00th=[13698], 60.00th=[17433], 00:40:39.136 | 70.00th=[18744], 80.00th=[23462], 90.00th=[30540], 95.00th=[32113], 00:40:39.136 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36963], 99.95th=[48497], 00:40:39.136 | 99.99th=[49546] 00:40:39.136 write: IOPS=2979, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1009msec); 0 zone resets 00:40:39.136 slat (usec): min=2, max=28562, avg=204.11, stdev=1130.15 00:40:39.136 clat (usec): min=260, max=65429, avg=28374.54, stdev=16818.42 00:40:39.136 lat (usec): min=483, max=65435, avg=28578.65, stdev=16925.53 00:40:39.136 clat percentiles (usec): 00:40:39.136 | 1.00th=[ 930], 5.00th=[ 3097], 10.00th=[ 5735], 20.00th=[16581], 00:40:39.136 | 30.00th=[19006], 40.00th=[23462], 50.00th=[23725], 60.00th=[27132], 00:40:39.136 | 70.00th=[34341], 80.00th=[46400], 90.00th=[53216], 95.00th=[58983], 00:40:39.136 | 99.00th=[65274], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:40:39.136 | 99.99th=[65274] 00:40:39.136 bw ( KiB/s): min=10760, max=12264, per=18.31%, avg=11512.00, stdev=1063.49, samples=2 00:40:39.136 iops : min= 2690, max= 3066, avg=2878.00, stdev=265.87, samples=2 00:40:39.136 lat (usec) : 500=0.11%, 750=0.29%, 1000=0.18% 00:40:39.136 lat (msec) : 2=0.81%, 4=3.32%, 10=16.48%, 20=29.99%, 50=40.33% 00:40:39.136 lat (msec) : 100=8.50% 00:40:39.136 cpu : usr=3.27%, sys=3.37%, ctx=336, majf=0, minf=1 00:40:39.136 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:40:39.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:39.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:39.136 issued rwts: total=2560,3006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:39.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:39.136 job2: (groupid=0, jobs=1): err= 0: pid=670020: Tue Dec 10 00:22:23 2024 00:40:39.136 read: IOPS=2901, BW=11.3MiB/s (11.9MB/s)(11.8MiB/1042msec) 00:40:39.136 slat (usec): min=2, max=14889, avg=123.86, stdev=859.77 00:40:39.137 clat (usec): min=4100, max=52054, avg=16242.35, stdev=9203.05 00:40:39.137 lat (usec): min=4109, max=55515, avg=16366.21, stdev=9241.96 00:40:39.137 clat percentiles (usec): 00:40:39.137 | 1.00th=[ 4948], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[10159], 00:40:39.137 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11600], 60.00th=[16909], 00:40:39.137 | 70.00th=[18220], 80.00th=[22414], 90.00th=[27132], 95.00th=[32900], 00:40:39.137 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:40:39.137 | 99.99th=[52167] 00:40:39.137 write: IOPS=2948, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1042msec); 0 zone resets 00:40:39.137 slat (usec): min=3, max=17642, avg=193.66, stdev=1003.10 00:40:39.137 clat (usec): min=265, max=64924, avg=26633.21, stdev=14793.75 00:40:39.137 lat (usec): min=551, max=64937, avg=26826.87, stdev=14864.07 00:40:39.137 clat percentiles (usec): 00:40:39.137 | 1.00th=[ 1975], 5.00th=[ 3982], 10.00th=[ 9634], 20.00th=[10945], 00:40:39.137 | 30.00th=[19268], 40.00th=[23462], 50.00th=[23725], 60.00th=[28443], 00:40:39.137 | 70.00th=[33817], 80.00th=[42730], 90.00th=[48497], 95.00th=[50070], 00:40:39.137 | 99.00th=[61604], 99.50th=[63177], 99.90th=[64750], 99.95th=[64750], 00:40:39.137 | 99.99th=[64750] 00:40:39.137 bw ( KiB/s): min=10480, max=14096, per=19.54%, avg=12288.00, stdev=2556.90, samples=2 00:40:39.137 iops : min= 2620, max= 3524, avg=3072.00, stdev=639.22, samples=2 00:40:39.137 lat (usec) : 500=0.02%, 750=0.11%, 1000=0.23% 00:40:39.137 lat (msec) : 2=0.28%, 4=1.92%, 10=10.47%, 20=41.66%, 50=42.53% 00:40:39.137 lat (msec) : 100=2.79% 00:40:39.137 cpu : usr=2.11%, sys=4.71%, ctx=385, majf=0, minf=1 00:40:39.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:40:39.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:39.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:39.137 issued rwts: total=3023,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:39.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:39.137 job3: (groupid=0, jobs=1): err= 0: pid=670021: Tue Dec 10 00:22:23 2024 00:40:39.137 read: IOPS=7085, BW=27.7MiB/s (29.0MB/s)(27.7MiB/1002msec) 00:40:39.137 slat (usec): min=2, max=7033, avg=69.39, stdev=450.40 00:40:39.137 clat (usec): min=642, max=16479, avg=9200.90, stdev=1740.13 00:40:39.137 lat (usec): min=3348, max=16573, avg=9270.30, stdev=1760.68 00:40:39.137 clat percentiles (usec): 00:40:39.137 | 1.00th=[ 5407], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 7963], 00:40:39.137 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9503], 00:40:39.137 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11338], 95.00th=[12256], 00:40:39.137 | 99.00th=[13698], 99.50th=[13960], 99.90th=[15270], 99.95th=[16450], 00:40:39.137 | 99.99th=[16450] 00:40:39.137 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:40:39.137 slat (usec): 
min=2, max=6811, avg=61.08, stdev=364.02 00:40:39.137 clat (usec): min=600, max=15208, avg=8630.89, stdev=1681.32 00:40:39.137 lat (usec): min=614, max=15212, avg=8691.97, stdev=1709.25 00:40:39.137 clat percentiles (usec): 00:40:39.137 | 1.00th=[ 3949], 5.00th=[ 5211], 10.00th=[ 6325], 20.00th=[ 7898], 00:40:39.137 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:40:39.137 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[10683], 95.00th=[10945], 00:40:39.137 | 99.00th=[13042], 99.50th=[13960], 99.90th=[15139], 99.95th=[15270], 00:40:39.137 | 99.99th=[15270] 00:40:39.137 bw ( KiB/s): min=27416, max=29928, per=45.60%, avg=28672.00, stdev=1776.25, samples=2 00:40:39.137 iops : min= 6854, max= 7482, avg=7168.00, stdev=444.06, samples=2 00:40:39.137 lat (usec) : 750=0.06% 00:40:39.137 lat (msec) : 2=0.08%, 4=0.64%, 10=77.35%, 20=21.86% 00:40:39.137 cpu : usr=6.69%, sys=9.19%, ctx=579, majf=0, minf=1 00:40:39.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:40:39.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:39.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:39.137 issued rwts: total=7100,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:39.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:39.137 00:40:39.137 Run status group 0 (all jobs): 00:40:39.137 READ: bw=59.1MiB/s (61.9MB/s), 9.91MiB/s-27.7MiB/s (10.4MB/s-29.0MB/s), io=61.5MiB (64.5MB), run=1002-1042msec 00:40:39.137 WRITE: bw=61.4MiB/s (64.4MB/s), 11.5MiB/s-27.9MiB/s (12.1MB/s-29.3MB/s), io=64.0MiB (67.1MB), run=1002-1042msec 00:40:39.137 00:40:39.137 Disk stats (read/write): 00:40:39.137 nvme0n1: ios=2467/2560, merge=0/0, ticks=40384/61869, in_queue=102253, util=84.17% 00:40:39.137 nvme0n2: ios=2048/2351, merge=0/0, ticks=34247/64818, in_queue=99065, util=84.75% 00:40:39.137 nvme0n3: ios=2081/2127, merge=0/0, ticks=34148/64811, in_queue=98959, util=97.66% 00:40:39.137 nvme0n4: ios=5966/6144, merge=0/0, ticks=30354/28317, in_queue=58671, util=89.42% 00:40:39.137 00:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:39.137 [global] 00:40:39.137 thread=1 00:40:39.137 invalidate=1 00:40:39.137 rw=randwrite 00:40:39.137 time_based=1 00:40:39.137 runtime=1 00:40:39.137 ioengine=libaio 00:40:39.137 direct=1 00:40:39.137 bs=4096 00:40:39.137 iodepth=128 00:40:39.137 norandommap=0 00:40:39.137 numjobs=1 00:40:39.137 00:40:39.137 verify_dump=1 00:40:39.137 verify_backlog=512 00:40:39.137 verify_state_save=0 00:40:39.137 do_verify=1 00:40:39.137 verify=crc32c-intel 00:40:39.137 [job0] 00:40:39.137 filename=/dev/nvme0n1 00:40:39.137 [job1] 00:40:39.137 filename=/dev/nvme0n2 00:40:39.137 [job2] 00:40:39.137 filename=/dev/nvme0n3 00:40:39.137 [job3] 00:40:39.137 filename=/dev/nvme0n4 00:40:39.137 Could not set queue depth (nvme0n1) 00:40:39.137 Could not set queue depth (nvme0n2) 00:40:39.137 Could not set queue depth (nvme0n3) 00:40:39.137 Could not set queue depth (nvme0n4) 00:40:39.396 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:39.396 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:39.396 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:39.396 job3: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:39.396 fio-3.35 00:40:39.396 Starting 4 threads 00:40:40.779 00:40:40.779 job0: (groupid=0, jobs=1): err= 0: pid=670435: Tue Dec 10 00:22:25 2024 00:40:40.779 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:40:40.779 slat (usec): min=2, max=31853, avg=100.73, stdev=735.69 00:40:40.779 clat (usec): min=462, max=83656, avg=13698.86, stdev=11364.92 00:40:40.779 lat (usec): min=2625, max=86860, avg=13799.59, stdev=11411.61 00:40:40.779 clat percentiles (usec): 00:40:40.779 | 1.00th=[ 5145], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[ 9896], 00:40:40.779 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:40:40.779 | 70.00th=[11207], 80.00th=[12125], 90.00th=[19268], 95.00th=[38011], 00:40:40.779 | 99.00th=[72877], 99.50th=[74974], 99.90th=[79168], 99.95th=[79168], 00:40:40.779 | 99.99th=[83362] 00:40:40.779 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:40:40.779 slat (usec): min=2, max=18790, avg=110.92, stdev=661.78 00:40:40.779 clat (usec): min=7022, max=97998, avg=13639.39, stdev=13386.76 00:40:40.779 lat (usec): min=7089, max=98022, avg=13750.31, stdev=13476.69 00:40:40.779 clat percentiles (usec): 00:40:40.779 | 1.00th=[ 7570], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[ 9503], 00:40:40.779 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:40:40.779 | 70.00th=[10814], 80.00th=[11338], 90.00th=[17433], 95.00th=[35390], 00:40:40.779 | 99.00th=[91751], 99.50th=[95945], 99.90th=[98042], 99.95th=[98042], 00:40:40.779 | 99.99th=[98042] 00:40:40.779 bw ( KiB/s): min=12608, max=24256, per=24.83%, avg=18432.00, stdev=8236.38, samples=2 00:40:40.779 iops : min= 3152, max= 6064, avg=4608.00, stdev=2059.09, samples=2 00:40:40.779 lat (usec) : 500=0.01% 00:40:40.779 lat (msec) : 4=0.35%, 10=33.49%, 20=57.41%, 50=5.66%, 100=3.08% 00:40:40.779 cpu : usr=3.89%, sys=4.29%, ctx=526, majf=0, minf=1 00:40:40.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:40:40.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:40.779 issued rwts: total=4608,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:40.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:40.779 job1: (groupid=0, jobs=1): err= 0: pid=670436: Tue Dec 10 00:22:25 2024 00:40:40.779 read: IOPS=4289, BW=16.8MiB/s (17.6MB/s)(17.5MiB/1043msec) 00:40:40.779 slat (nsec): min=1773, max=22467k, avg=88864.20, stdev=762117.41 00:40:40.779 clat (usec): min=1765, max=57726, avg=13927.48, stdev=9854.87 00:40:40.779 lat (usec): min=1773, max=63846, avg=14016.35, stdev=9902.45 00:40:40.779 clat percentiles (usec): 00:40:40.779 | 1.00th=[ 4080], 5.00th=[ 5342], 10.00th=[ 7046], 20.00th=[ 8848], 00:40:40.779 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:40:40.779 | 70.00th=[14091], 80.00th=[17695], 90.00th=[22938], 95.00th=[31065], 00:40:40.779 | 99.00th=[56886], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:40:40.779 | 99.99th=[57934] 00:40:40.779 write: IOPS=4418, BW=17.3MiB/s (18.1MB/s)(18.0MiB/1043msec); 0 zone resets 00:40:40.779 slat (usec): min=2, max=41677, avg=106.29, stdev=1058.51 00:40:40.779 clat (usec): min=727, max=55593, avg=15204.09, stdev=9943.91 00:40:40.779 lat (usec): min=801, max=55602, avg=15310.38, stdev=10009.64 00:40:40.779 clat percentiles (usec): 
00:40:40.779 | 1.00th=[ 3523], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 8848], 00:40:40.779 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[11994], 60.00th=[13042], 00:40:40.779 | 70.00th=[17433], 80.00th=[19792], 90.00th=[28181], 95.00th=[43779], 00:40:40.780 | 99.00th=[46400], 99.50th=[46924], 99.90th=[49021], 99.95th=[54789], 00:40:40.780 | 99.99th=[55837] 00:40:40.780 bw ( KiB/s): min=16384, max=20480, per=24.83%, avg=18432.00, stdev=2896.31, samples=2 00:40:40.780 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:40:40.780 lat (usec) : 750=0.01% 00:40:40.780 lat (msec) : 2=0.30%, 4=1.15%, 10=35.26%, 20=47.37%, 50=14.36% 00:40:40.780 lat (msec) : 100=1.56% 00:40:40.780 cpu : usr=4.03%, sys=7.29%, ctx=258, majf=0, minf=2 00:40:40.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:40:40.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:40.780 issued rwts: total=4474,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:40.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:40.780 job2: (groupid=0, jobs=1): err= 0: pid=670438: Tue Dec 10 00:22:25 2024 00:40:40.780 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:40:40.780 slat (usec): min=2, max=10514, avg=97.94, stdev=622.87 00:40:40.780 clat (usec): min=6172, max=29366, avg=12703.68, stdev=3581.79 00:40:40.780 lat (usec): min=6179, max=29374, avg=12801.63, stdev=3611.42 00:40:40.780 clat percentiles (usec): 00:40:40.780 | 1.00th=[ 7308], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9765], 00:40:40.780 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12256], 60.00th=[12911], 00:40:40.780 | 70.00th=[13435], 80.00th=[14222], 90.00th=[17433], 95.00th=[19530], 00:40:40.780 | 99.00th=[28443], 99.50th=[28443], 99.90th=[28443], 99.95th=[29230], 00:40:40.780 | 99.99th=[29492] 00:40:40.780 write: IOPS=5001, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1003msec); 0 zone resets 00:40:40.780 slat (usec): min=3, max=14032, avg=101.14, stdev=646.77 00:40:40.780 clat (usec): min=594, max=65373, avg=13675.82, stdev=8277.67 00:40:40.780 lat (usec): min=2343, max=65377, avg=13776.96, stdev=8330.79 00:40:40.780 clat percentiles (usec): 00:40:40.780 | 1.00th=[ 6718], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[10552], 00:40:40.780 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:40:40.780 | 70.00th=[12387], 80.00th=[13173], 90.00th=[20579], 95.00th=[28181], 00:40:40.780 | 99.00th=[62653], 99.50th=[64226], 99.90th=[65274], 99.95th=[65274], 00:40:40.780 | 99.99th=[65274] 00:40:40.780 bw ( KiB/s): min=18280, max=20832, per=26.35%, avg=19556.00, stdev=1804.54, samples=2 00:40:40.780 iops : min= 4570, max= 5208, avg=4889.00, stdev=451.13, samples=2 00:40:40.780 lat (usec) : 750=0.01% 00:40:40.780 lat (msec) : 4=0.07%, 10=16.70%, 20=74.87%, 50=7.45%, 100=0.90% 00:40:40.780 cpu : usr=5.29%, sys=6.99%, ctx=469, majf=0, minf=1 00:40:40.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:40:40.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:40.780 issued rwts: total=4608,5017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:40.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:40.780 job3: (groupid=0, jobs=1): err= 0: pid=670439: Tue Dec 10 00:22:25 2024 00:40:40.780 read: IOPS=4693, BW=18.3MiB/s 
(19.2MB/s)(18.5MiB/1007msec) 00:40:40.780 slat (usec): min=2, max=20994, avg=112.58, stdev=936.96 00:40:40.780 clat (usec): min=3831, max=38982, avg=14459.62, stdev=5935.11 00:40:40.780 lat (usec): min=3839, max=39011, avg=14572.20, stdev=5990.43 00:40:40.780 clat percentiles (usec): 00:40:40.780 | 1.00th=[ 6915], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10028], 00:40:40.780 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11600], 60.00th=[13435], 00:40:40.780 | 70.00th=[17433], 80.00th=[18744], 90.00th=[23462], 95.00th=[27395], 00:40:40.780 | 99.00th=[32900], 99.50th=[32900], 99.90th=[33817], 99.95th=[34341], 00:40:40.780 | 99.99th=[39060] 00:40:40.780 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:40:40.780 slat (usec): min=3, max=15188, avg=81.58, stdev=656.89 00:40:40.780 clat (usec): min=221, max=32423, avg=11594.93, stdev=4013.81 00:40:40.780 lat (usec): min=857, max=32444, avg=11676.51, stdev=4065.89 00:40:40.780 clat percentiles (usec): 00:40:40.780 | 1.00th=[ 3359], 5.00th=[ 6194], 10.00th=[ 6783], 20.00th=[ 8979], 00:40:40.780 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11207], 00:40:40.780 | 70.00th=[12387], 80.00th=[13566], 90.00th=[17433], 95.00th=[18744], 00:40:40.780 | 99.00th=[23462], 99.50th=[27919], 99.90th=[28181], 99.95th=[32375], 00:40:40.780 | 99.99th=[32375] 00:40:40.780 bw ( KiB/s): min=20408, max=20480, per=27.54%, avg=20444.00, stdev=50.91, samples=2 00:40:40.780 iops : min= 5102, max= 5120, avg=5111.00, stdev=12.73, samples=2 00:40:40.780 lat (usec) : 250=0.01%, 1000=0.07% 00:40:40.780 lat (msec) : 2=0.07%, 4=0.60%, 10=23.74%, 20=65.28%, 50=10.24% 00:40:40.780 cpu : usr=5.86%, sys=8.55%, ctx=357, majf=0, minf=1 00:40:40.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:40:40.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:40.780 issued rwts: total=4726,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:40.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:40.780 00:40:40.780 Run status group 0 (all jobs): 00:40:40.780 READ: bw=69.0MiB/s (72.3MB/s), 16.8MiB/s-18.3MiB/s (17.6MB/s-19.2MB/s), io=71.9MiB (75.4MB), run=1003-1043msec 00:40:40.780 WRITE: bw=72.5MiB/s (76.0MB/s), 17.3MiB/s-19.9MiB/s (18.1MB/s-20.8MB/s), io=75.6MiB (79.3MB), run=1003-1043msec 00:40:40.780 00:40:40.780 Disk stats (read/write): 00:40:40.780 nvme0n1: ios=3414/3584, merge=0/0, ticks=12656/13992, in_queue=26648, util=98.30% 00:40:40.780 nvme0n2: ios=3983/4096, merge=0/0, ticks=42489/45872, in_queue=88361, util=99.49% 00:40:40.780 nvme0n3: ios=4148/4287, merge=0/0, ticks=29177/29335, in_queue=58512, util=99.68% 00:40:40.780 nvme0n4: ios=3608/4039, merge=0/0, ticks=54778/45639, in_queue=100417, util=99.89% 00:40:40.780 00:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:40.780 00:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=670700 00:40:40.780 00:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:40.780 00:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:40.780 [global] 00:40:40.780 thread=1 00:40:40.780 invalidate=1 00:40:40.780 rw=read 00:40:40.780 time_based=1 00:40:40.780 runtime=10 
00:40:40.780 ioengine=libaio 00:40:40.780 direct=1 00:40:40.780 bs=4096 00:40:40.780 iodepth=1 00:40:40.780 norandommap=1 00:40:40.780 numjobs=1 00:40:40.780 00:40:40.780 [job0] 00:40:40.780 filename=/dev/nvme0n1 00:40:40.780 [job1] 00:40:40.780 filename=/dev/nvme0n2 00:40:40.780 [job2] 00:40:40.781 filename=/dev/nvme0n3 00:40:40.781 [job3] 00:40:40.781 filename=/dev/nvme0n4 00:40:40.781 Could not set queue depth (nvme0n1) 00:40:40.781 Could not set queue depth (nvme0n2) 00:40:40.781 Could not set queue depth (nvme0n3) 00:40:40.781 Could not set queue depth (nvme0n4) 00:40:41.348 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:41.348 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:41.348 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:41.348 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:41.348 fio-3.35 00:40:41.348 Starting 4 threads 00:40:43.885 00:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:43.885 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=45158400, buflen=4096 00:40:43.885 fio: pid=670863, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:43.885 00:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:44.144 00:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:44.144 00:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:44.144 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:40:44.144 fio: pid=670862, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:44.404 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=339968, buflen=4096 00:40:44.404 fio: pid=670860, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:44.404 00:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:44.404 00:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:44.664 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=331776, buflen=4096 00:40:44.664 fio: pid=670861, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:44.664 00:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:44.664 00:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:44.664 00:40:44.664 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not 
supported): pid=670860: Tue Dec 10 00:22:28 2024 00:40:44.664 read: IOPS=27, BW=109KiB/s (112kB/s)(332KiB/3037msec) 00:40:44.664 slat (usec): min=10, max=15757, avg=211.78, stdev=1716.65 00:40:44.664 clat (usec): min=265, max=41497, avg=36048.46, stdev=13301.69 00:40:44.664 lat (usec): min=291, max=56969, avg=36262.49, stdev=13485.52 00:40:44.664 clat percentiles (usec): 00:40:44.664 | 1.00th=[ 265], 5.00th=[ 326], 10.00th=[ 363], 20.00th=[40633], 00:40:44.664 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:44.664 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:44.664 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:40:44.664 | 99.99th=[41681] 00:40:44.664 bw ( KiB/s): min= 104, max= 120, per=0.79%, avg=110.40, stdev= 6.69, samples=5 00:40:44.664 iops : min= 26, max= 30, avg=27.60, stdev= 1.67, samples=5 00:40:44.664 lat (usec) : 500=11.90% 00:40:44.664 lat (msec) : 50=86.90% 00:40:44.664 cpu : usr=0.00%, sys=0.13%, ctx=86, majf=0, minf=1 00:40:44.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:44.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.664 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.664 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:44.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:44.664 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=670861: Tue Dec 10 00:22:28 2024 00:40:44.664 read: IOPS=25, BW=99.8KiB/s (102kB/s)(324KiB/3247msec) 00:40:44.664 slat (usec): min=11, max=12777, avg=275.57, stdev=1641.07 00:40:44.664 clat (usec): min=369, max=42039, avg=39482.72, stdev=7714.80 00:40:44.664 lat (usec): min=396, max=54037, avg=39761.37, stdev=7946.27 00:40:44.664 clat percentiles (usec): 00:40:44.664 | 1.00th=[ 371], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:44.664 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:44.664 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:44.664 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:44.664 | 99.99th=[42206] 00:40:44.664 bw ( KiB/s): min= 90, max= 112, per=0.72%, avg=100.33, stdev= 7.84, samples=6 00:40:44.664 iops : min= 22, max= 28, avg=25.00, stdev= 2.10, samples=6 00:40:44.664 lat (usec) : 500=3.66% 00:40:44.664 lat (msec) : 50=95.12% 00:40:44.664 cpu : usr=0.00%, sys=0.12%, ctx=84, majf=0, minf=2 00:40:44.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:44.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.664 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.664 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:44.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:44.664 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=670862: Tue Dec 10 00:22:28 2024 00:40:44.664 read: IOPS=25, BW=101KiB/s (103kB/s)(288KiB/2854msec) 00:40:44.664 slat (nsec): min=12567, max=62376, avg=26384.40, stdev=5144.75 00:40:44.664 clat (usec): min=298, max=42285, avg=39304.22, stdev=8169.12 00:40:44.664 lat (usec): min=325, max=42311, avg=39330.62, stdev=8166.52 00:40:44.664 clat percentiles (usec): 00:40:44.664 | 1.00th=[ 297], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:40:44.664 | 
30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:44.664 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:44.664 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:44.664 | 99.99th=[42206] 00:40:44.664 bw ( KiB/s): min= 96, max= 104, per=0.71%, avg=99.20, stdev= 4.38, samples=5 00:40:44.664 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:40:44.664 lat (usec) : 500=2.74%, 750=1.37% 00:40:44.664 lat (msec) : 50=94.52% 00:40:44.664 cpu : usr=0.00%, sys=0.14%, ctx=74, majf=0, minf=2 00:40:44.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:44.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.664 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.664 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:44.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:44.664 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=670863: Tue Dec 10 00:22:28 2024 00:40:44.664 read: IOPS=4205, BW=16.4MiB/s (17.2MB/s)(43.1MiB/2622msec) 00:40:44.664 slat (nsec): min=8336, max=35876, avg=9277.08, stdev=1149.41 00:40:44.664 clat (usec): min=191, max=4289, avg=225.02, stdev=52.27 00:40:44.664 lat (usec): min=201, max=4298, avg=234.30, stdev=52.29 00:40:44.664 clat percentiles (usec): 00:40:44.664 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 219], 00:40:44.664 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 225], 00:40:44.664 | 70.00th=[ 227], 80.00th=[ 229], 90.00th=[ 233], 95.00th=[ 237], 00:40:44.664 | 99.00th=[ 249], 99.50th=[ 262], 99.90th=[ 363], 99.95th=[ 420], 00:40:44.664 | 99.99th=[ 3752] 00:40:44.664 bw ( KiB/s): min=16744, max=17192, per=100.00%, avg=16974.40, stdev=166.99, samples=5 00:40:44.664 iops : min= 4186, max= 4298, avg=4243.60, stdev=41.75, samples=5 00:40:44.664 lat (usec) : 250=99.02%, 500=0.94%, 750=0.01% 00:40:44.664 lat (msec) : 4=0.01%, 10=0.01% 00:40:44.664 cpu : usr=1.76%, sys=4.85%, ctx=11027, majf=0, minf=2 00:40:44.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:44.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.664 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:44.664 issued rwts: total=11026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:44.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:44.664 00:40:44.664 Run status group 0 (all jobs): 00:40:44.664 READ: bw=13.5MiB/s (14.2MB/s), 99.8KiB/s-16.4MiB/s (102kB/s-17.2MB/s), io=44.0MiB (46.1MB), run=2622-3247msec 00:40:44.664 00:40:44.664 Disk stats (read/write): 00:40:44.664 nvme0n1: ios=76/0, merge=0/0, ticks=2789/0, in_queue=2789, util=94.46% 00:40:44.664 nvme0n2: ios=99/0, merge=0/0, ticks=3087/0, in_queue=3087, util=96.18% 00:40:44.664 nvme0n3: ios=71/0, merge=0/0, ticks=2791/0, in_queue=2791, util=96.29% 00:40:44.664 nvme0n4: ios=10927/0, merge=0/0, ticks=2365/0, in_queue=2365, util=96.45% 00:40:44.924 00:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:44.924 00:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:44.924 00:22:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:44.924 00:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:45.183 00:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:45.183 00:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:45.442 00:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:45.442 00:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:45.701 00:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:45.701 00:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 670700 00:40:45.701 00:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:45.701 00:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:45.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:45.701 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:45.701 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:40:45.701 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:45.701 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:45.701 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:45.701 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:45.701 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:40:45.701 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:45.701 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:45.701 nvmf hotplug test: fio failed as expected 00:40:45.701 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:45.960 00:22:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:45.960 rmmod nvme_tcp 00:40:45.960 rmmod nvme_fabrics 00:40:45.960 rmmod nvme_keyring 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 667904 ']' 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 667904 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 667904 ']' 00:40:45.960 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 667904 00:40:45.961 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:40:45.961 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:45.961 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 667904 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 667904' 00:40:46.220 killing process with pid 667904 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 667904 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 667904 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:46.220 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:48.758 00:40:48.758 real 0m28.221s 00:40:48.758 user 1m46.393s 00:40:48.758 sys 0m14.934s 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:48.758 ************************************ 00:40:48.758 END TEST nvmf_fio_target 00:40:48.758 ************************************ 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:48.758 ************************************ 00:40:48.758 START TEST nvmf_bdevio 00:40:48.758 ************************************ 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:48.758 * Looking for test storage... 
00:40:48.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:48.758 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:48.758 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:48.758 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:48.758 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:48.758 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:48.758 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:48.758 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:48.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.759 --rc genhtml_branch_coverage=1 00:40:48.759 --rc genhtml_function_coverage=1 00:40:48.759 --rc genhtml_legend=1 00:40:48.759 --rc geninfo_all_blocks=1 00:40:48.759 --rc geninfo_unexecuted_blocks=1 00:40:48.759 00:40:48.759 ' 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:48.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.759 --rc genhtml_branch_coverage=1 00:40:48.759 --rc genhtml_function_coverage=1 00:40:48.759 --rc genhtml_legend=1 00:40:48.759 --rc geninfo_all_blocks=1 00:40:48.759 --rc geninfo_unexecuted_blocks=1 00:40:48.759 00:40:48.759 ' 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:48.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.759 --rc genhtml_branch_coverage=1 00:40:48.759 --rc genhtml_function_coverage=1 00:40:48.759 --rc genhtml_legend=1 00:40:48.759 --rc geninfo_all_blocks=1 00:40:48.759 --rc geninfo_unexecuted_blocks=1 00:40:48.759 00:40:48.759 ' 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:48.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.759 --rc genhtml_branch_coverage=1 00:40:48.759 --rc genhtml_function_coverage=1 00:40:48.759 --rc genhtml_legend=1 00:40:48.759 --rc geninfo_all_blocks=1 00:40:48.759 --rc geninfo_unexecuted_blocks=1 00:40:48.759 00:40:48.759 ' 00:40:48.759 00:22:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:48.759 00:22:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:48.759 00:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:56.884 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:56.884 00:22:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:56.884 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:56.884 Found net devices under 0000:af:00.0: cvl_0_0 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:40:56.884 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:56.885 Found net devices under 0000:af:00.1: cvl_0_1 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:56.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:56.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:40:56.885 00:40:56.885 --- 10.0.0.2 ping statistics --- 00:40:56.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:56.885 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:56.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:56.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:40:56.885 00:40:56.885 --- 10.0.0.1 ping statistics --- 00:40:56.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:56.885 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:56.885 00:22:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=675356 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 675356 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 675356 ']' 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:56.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:56.885 00:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:56.885 [2024-12-10 00:22:40.419911] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:56.885 [2024-12-10 00:22:40.420860] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:40:56.885 [2024-12-10 00:22:40.420894] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:56.885 [2024-12-10 00:22:40.515035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:56.885 [2024-12-10 00:22:40.556233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:56.885 [2024-12-10 00:22:40.556272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:56.885 [2024-12-10 00:22:40.556282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:56.885 [2024-12-10 00:22:40.556291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:56.885 [2024-12-10 00:22:40.556297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:56.885 [2024-12-10 00:22:40.557987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:56.885 [2024-12-10 00:22:40.558095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:56.885 [2024-12-10 00:22:40.558130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:56.885 [2024-12-10 00:22:40.558132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:56.885 [2024-12-10 00:22:40.625390] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
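
For reference, the interface plumbing that nvmf_tcp_init traced above amounts to roughly the following sketch (reconstructed from the trace; the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addresses are the ones used in this run):

TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # target-side port moves into its own namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address stays in the default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address lives inside the namespace
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
ping -c 1 10.0.0.2                                         # default namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target namespace -> default namespace

Isolating the target-side port in cvl_0_0_ns_spdk keeps the 10.0.0.1 <-> 10.0.0.2 traffic on the path between the two e810 ports instead of letting the kernel short-circuit it locally; the two pings above verify that path before the target is brought up.
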
00:40:56.885 [2024-12-10 00:22:40.625857] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:56.885 [2024-12-10 00:22:40.626072] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:56.885 [2024-12-10 00:22:40.626313] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:56.885 [2024-12-10 00:22:40.626357] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:56.885 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:56.885 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:40:56.885 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:56.885 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:56.885 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:56.885 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:56.885 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:56.885 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.885 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:56.885 [2024-12-10 00:22:41.307132] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:56.885 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.885 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:56.885 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.885 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:57.145 Malloc0 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:57.145 00:22:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:57.145 [2024-12-10 00:22:41.395398] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:57.145 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:57.145 { 00:40:57.145 "params": { 00:40:57.145 "name": "Nvme$subsystem", 00:40:57.145 "trtype": "$TEST_TRANSPORT", 00:40:57.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:57.145 "adrfam": "ipv4", 00:40:57.145 "trsvcid": "$NVMF_PORT", 00:40:57.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:57.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:57.145 "hdgst": ${hdgst:-false}, 00:40:57.145 "ddgst": ${ddgst:-false} 00:40:57.145 }, 00:40:57.146 "method": "bdev_nvme_attach_controller" 00:40:57.146 } 00:40:57.146 EOF 00:40:57.146 )") 00:40:57.146 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:40:57.146 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:40:57.146 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:40:57.146 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:57.146 "params": { 00:40:57.146 "name": "Nvme1", 00:40:57.146 "trtype": "tcp", 00:40:57.146 "traddr": "10.0.0.2", 00:40:57.146 "adrfam": "ipv4", 00:40:57.146 "trsvcid": "4420", 00:40:57.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:57.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:57.146 "hdgst": false, 00:40:57.146 "ddgst": false 00:40:57.146 }, 00:40:57.146 "method": "bdev_nvme_attach_controller" 00:40:57.146 }' 00:40:57.146 [2024-12-10 00:22:41.450847] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:40:57.146 [2024-12-10 00:22:41.450904] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675522 ] 00:40:57.146 [2024-12-10 00:22:41.545836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:57.146 [2024-12-10 00:22:41.587779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:57.146 [2024-12-10 00:22:41.587890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:57.146 [2024-12-10 00:22:41.587891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:57.405 I/O targets: 00:40:57.405 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:57.405 00:40:57.405 00:40:57.405 CUnit - A unit testing framework for C - Version 2.1-3 00:40:57.406 http://cunit.sourceforge.net/ 00:40:57.406 00:40:57.406 00:40:57.406 Suite: bdevio tests on: Nvme1n1 00:40:57.665 Test: blockdev write read block ...passed 00:40:57.665 Test: blockdev write zeroes read block ...passed 00:40:57.665 Test: blockdev write zeroes read no split ...passed 00:40:57.665 Test: blockdev write zeroes read split ...passed 00:40:57.665 Test: blockdev write zeroes read split partial ...passed 00:40:57.665 Test: blockdev reset ...[2024-12-10 00:22:42.056744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:57.665 [2024-12-10 00:22:42.056808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75d590 (9): Bad file descriptor 00:40:57.925 [2024-12-10 00:22:42.189891] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:40:57.925 passed 00:40:57.925 Test: blockdev write read 8 blocks ...passed 00:40:57.925 Test: blockdev write read size > 128k ...passed 00:40:57.925 Test: blockdev write read invalid size ...passed 00:40:57.925 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:57.925 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:57.925 Test: blockdev write read max offset ...passed 00:40:57.925 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:57.925 Test: blockdev writev readv 8 blocks ...passed 00:40:57.925 Test: blockdev writev readv 30 x 1block ...passed 00:40:58.184 Test: blockdev writev readv block ...passed 00:40:58.184 Test: blockdev writev readv size > 128k ...passed 00:40:58.184 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:58.184 Test: blockdev comparev and writev ...[2024-12-10 00:22:42.443096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:58.184 [2024-12-10 00:22:42.443128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:58.184 [2024-12-10 00:22:42.443145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:58.184 [2024-12-10 00:22:42.443155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:58.184 [2024-12-10 00:22:42.443447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:58.184 [2024-12-10 00:22:42.443460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:58.184 [2024-12-10 00:22:42.443474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:58.184 [2024-12-10 00:22:42.443484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:58.184 [2024-12-10 00:22:42.443772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:58.184 [2024-12-10 00:22:42.443785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:58.184 [2024-12-10 00:22:42.443799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:58.184 [2024-12-10 00:22:42.443808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:58.184 [2024-12-10 00:22:42.444104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:58.184 [2024-12-10 00:22:42.444121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:58.184 [2024-12-10 00:22:42.444135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:58.184 [2024-12-10 00:22:42.444145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:58.184 passed 00:40:58.184 Test: blockdev nvme passthru rw ...passed 00:40:58.184 Test: blockdev nvme passthru vendor specific ...[2024-12-10 00:22:42.526208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:58.184 [2024-12-10 00:22:42.526227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:58.184 [2024-12-10 00:22:42.526344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:58.184 [2024-12-10 00:22:42.526356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:58.184 [2024-12-10 00:22:42.526467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:58.184 [2024-12-10 00:22:42.526478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:58.184 [2024-12-10 00:22:42.526595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:58.184 [2024-12-10 00:22:42.526607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:58.184 passed 00:40:58.184 Test: blockdev nvme admin passthru ...passed 00:40:58.184 Test: blockdev copy ...passed 00:40:58.184 00:40:58.184 Run Summary: Type Total Ran Passed Failed Inactive 00:40:58.184 suites 1 1 n/a 0 0 00:40:58.184 tests 23 23 23 0 0 00:40:58.184 asserts 152 152 152 0 n/a 00:40:58.184 00:40:58.184 Elapsed time = 1.434 seconds 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:58.444 rmmod nvme_tcp 00:40:58.444 rmmod nvme_fabrics 00:40:58.444 rmmod nvme_keyring 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
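
For reference, the target bring-up that this bdevio pass exercised reduces to roughly the RPC sequence below (a sketch: rpc_cmd in the harness effectively forwards to scripts/rpc.py, and $SPDK_DIR is a stand-in path for the checkout used above; all values are copied from the trace):

SPDK_DIR=/path/to/spdk                                     # stand-in path
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
# (the harness waits for /var/tmp/spdk.sock before issuing RPCs)
"$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
"$SPDK_DIR"/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512-byte blocks
"$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$SPDK_DIR"/test/bdev/bdevio/bdevio --json nvme.json       # nvme.json = the bdev_nvme_attach_controller block printed above

bdevio then drives the blockdev write/read/compare suite against the resulting Nvme1n1 bdev (131072 blocks of 512 bytes, i.e. the 64 MiB Malloc0 namespace), which is the Run Summary shown above.
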
00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 675356 ']' 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 675356 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 675356 ']' 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 675356 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 675356 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 675356' 00:40:58.444 killing process with pid 675356 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 675356 00:40:58.444 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 675356 00:40:58.704 00:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:58.704 00:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:58.704 00:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:58.704 00:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:40:58.704 00:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:40:58.704 00:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:58.704 00:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:40:58.704 00:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:58.704 00:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:58.704 00:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:58.704 00:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:58.704 00:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:01.238 00:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:01.238 00:41:01.238 real 0m12.316s 00:41:01.238 user 0m10.414s 
00:41:01.238 sys 0m6.933s 00:41:01.238 00:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:01.238 00:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:01.238 ************************************ 00:41:01.238 END TEST nvmf_bdevio 00:41:01.238 ************************************ 00:41:01.238 00:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:41:01.238 00:41:01.238 real 4m59.998s 00:41:01.238 user 9m23.501s 00:41:01.238 sys 2m22.933s 00:41:01.238 00:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:01.238 00:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:01.238 ************************************ 00:41:01.238 END TEST nvmf_target_core_interrupt_mode 00:41:01.238 ************************************ 00:41:01.238 00:22:45 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:01.238 00:22:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:01.238 00:22:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:01.238 00:22:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:01.238 ************************************ 00:41:01.238 START TEST nvmf_interrupt 00:41:01.238 ************************************ 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:01.238 * Looking for test storage... 
00:41:01.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:41:01.238 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:01.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.239 --rc genhtml_branch_coverage=1 00:41:01.239 --rc genhtml_function_coverage=1 00:41:01.239 --rc genhtml_legend=1 00:41:01.239 --rc geninfo_all_blocks=1 00:41:01.239 --rc geninfo_unexecuted_blocks=1 00:41:01.239 00:41:01.239 ' 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:01.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.239 --rc genhtml_branch_coverage=1 00:41:01.239 --rc genhtml_function_coverage=1 00:41:01.239 --rc genhtml_legend=1 00:41:01.239 --rc geninfo_all_blocks=1 00:41:01.239 --rc geninfo_unexecuted_blocks=1 00:41:01.239 00:41:01.239 ' 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:01.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.239 --rc genhtml_branch_coverage=1 00:41:01.239 --rc genhtml_function_coverage=1 00:41:01.239 --rc genhtml_legend=1 00:41:01.239 --rc geninfo_all_blocks=1 00:41:01.239 --rc geninfo_unexecuted_blocks=1 00:41:01.239 00:41:01.239 ' 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:01.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.239 --rc genhtml_branch_coverage=1 00:41:01.239 --rc genhtml_function_coverage=1 00:41:01.239 --rc genhtml_legend=1 00:41:01.239 --rc geninfo_all_blocks=1 00:41:01.239 --rc geninfo_unexecuted_blocks=1 00:41:01.239 00:41:01.239 ' 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:41:01.239 00:22:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:09.368 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:09.368 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:09.369 00:22:52 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:09.369 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:09.369 Found net devices under 0000:af:00.0: cvl_0_0 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:09.369 Found net devices under 0000:af:00.1: cvl_0_1 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:09.369 00:22:52 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:09.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:09.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:41:09.369 00:41:09.369 --- 10.0.0.2 ping statistics --- 00:41:09.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:09.369 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:09.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:09.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:41:09.369 00:41:09.369 --- 10.0.0.1 ping statistics --- 00:41:09.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:09.369 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=679358 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 679358 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 679358 ']' 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:09.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:09.369 00:22:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:09.369 [2024-12-10 00:22:52.797582] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:09.369 [2024-12-10 00:22:52.798544] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:41:09.369 [2024-12-10 00:22:52.798580] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:09.369 [2024-12-10 00:22:52.895572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:09.369 [2024-12-10 00:22:52.936340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:41:09.369 [2024-12-10 00:22:52.936378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:09.369 [2024-12-10 00:22:52.936388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:09.369 [2024-12-10 00:22:52.936396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:09.369 [2024-12-10 00:22:52.936404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:09.369 [2024-12-10 00:22:52.937704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:09.369 [2024-12-10 00:22:52.937705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:09.369 [2024-12-10 00:22:53.005065] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:09.369 [2024-12-10 00:22:53.005550] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:09.369 [2024-12-10 00:22:53.005753] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:41:09.369 5000+0 records in 00:41:09.369 5000+0 records out 00:41:09.369 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0236473 s, 433 MB/s 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:09.369 AIO0 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:09.369 [2024-12-10 00:22:53.738569] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:09.369 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:09.370 00:22:53 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:09.370 [2024-12-10 00:22:53.778937] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 679358 0 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 679358 0 idle 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=679358 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 679358 -w 256 00:41:09.370 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:09.629 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 679358 root 20 0 128.2g 45696 34048 S 0.0 0.1 0:00.28 reactor_0' 00:41:09.629 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 679358 root 20 0 128.2g 45696 34048 S 0.0 0.1 0:00.28 reactor_0 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 679358 1 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 679358 1 idle 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=679358 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 679358 -w 256 00:41:09.630 00:22:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 679386 root 20 0 128.2g 45696 34048 S 0.0 0.1 0:00.00 reactor_1' 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 679386 root 20 0 128.2g 45696 34048 S 0.0 0.1 0:00.00 reactor_1 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=679650 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
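In outline, the target-side setup traced above amounts to the following rpc_cmd sequence (a condensed sketch, not captured output; /tmp/aiofile stands in for the workspace aiofile path used in this run, and 10.0.0.2:4420 is this host's test address):

  dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000               # backing file for the AIO bdev
  rpc_cmd bdev_aio_create /tmp/aiofile AIO0 2048                   # expose the file as bdev "AIO0"
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256           # TCP transport, I/O queue depth 256
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf invocation just above then drives queue-depth-256, 4 KiB random I/O against that subsystem for 10 seconds from cores 2-3 (-c 0xC), which is what pushes both reactors past the busy threshold in the checks that follow.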
00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 679358 0 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 679358 0 busy 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=679358 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 679358 -w 256 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 679358 root 20 0 128.2g 46592 34048 R 99.9 0.1 0:00.46 reactor_0' 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 679358 root 20 0 128.2g 46592 34048 R 99.9 0.1 0:00.46 reactor_0 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:09.889 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 679358 1 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 679358 1 busy 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=679358 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 679358 -w 256 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 679386 root 20 0 128.2g 46592 34048 R 93.3 0.1 0:00.27 reactor_1' 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 679386 root 20 0 128.2g 46592 34048 R 93.3 0.1 0:00.27 reactor_1 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:10.148 00:22:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 679650 00:41:20.125 Initializing NVMe Controllers 00:41:20.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:20.125 Controller IO queue size 256, less than required. 00:41:20.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:20.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:20.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:20.125 Initialization complete. Launching workers. 
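The reactor_is_busy / reactor_is_idle checks interleaved above reduce to one top sample of the reactor thread compared against a threshold. A minimal standalone sketch of that probe (the check_reactor name is invented for illustration; pid 679358 and the 30/65 thresholds are the values visible in this trace):

  check_reactor() {                       # prints "busy" or "idle" for reactor <idx> of <pid>
    local pid=$1 idx=$2 threshold=$3
    local line cpu
    line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")      # one batch sample, per-thread view
    cpu=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')    # 9th column is %CPU
    cpu=${cpu%.*}                                                  # keep the integer part
    if (( cpu > threshold )); then echo busy; else echo idle; fi
  }
  check_reactor 679358 0 30    # while perf runs this reports busy (99.9 above); idle again afterwards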
00:41:20.125 ======================================================== 00:41:20.125 Latency(us) 00:41:20.125 Device Information : IOPS MiB/s Average min max 00:41:20.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16570.80 64.73 15456.45 3792.88 33007.17 00:41:20.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16749.20 65.43 15287.66 8207.27 30400.67 00:41:20.125 ======================================================== 00:41:20.125 Total : 33320.00 130.16 15371.60 3792.88 33007.17 00:41:20.125 00:41:20.125 [2024-12-10 00:23:04.303182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339970 is same with the state(6) to be set 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 679358 0 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 679358 0 idle 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=679358 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 679358 -w 256 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 679358 root 20 0 128.2g 46592 34048 S 6.7 0.1 0:20.28 reactor_0' 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 679358 root 20 0 128.2g 46592 34048 S 6.7 0.1 0:20.28 reactor_0 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 679358 1 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 679358 1 idle 
00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=679358 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 679358 -w 256 00:41:20.125 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:20.385 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 679386 root 20 0 128.2g 46592 34048 S 0.0 0.1 0:10.00 reactor_1' 00:41:20.385 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 679386 root 20 0 128.2g 46592 34048 S 0.0 0.1 0:10.00 reactor_1 00:41:20.385 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:20.385 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:20.385 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:20.385 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:20.385 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:20.385 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:20.385 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:20.385 00:23:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:20.385 00:23:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:20.953 00:23:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:41:20.953 00:23:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:41:20.953 00:23:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:20.953 00:23:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:20.953 00:23:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
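The host-side step traced above, assembled into one place (not a verbatim excerpt; the hostnqn/hostid UUID is the value generated for this particular run): connect the kernel NVMe/TCP initiator to the listener and poll lsblk until a namespace with the test serial appears.

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
       --hostid=006f0d1b-21c0-e711-906e-00163566263e
  for i in $(seq 1 15); do                                          # waitforserial: poll up to ~30 s
    lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME && break
    sleep 2
  done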
00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 679358 0 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 679358 0 idle 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=679358 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 679358 -w 256 00:41:22.857 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 679358 root 20 0 128.2g 77056 34048 S 0.0 0.1 0:20.63 reactor_0' 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 679358 root 20 0 128.2g 77056 34048 S 0.0 0.1 0:20.63 reactor_0 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 679358 1 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 679358 1 idle 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=679358 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 679358 -w 256 00:41:23.116 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:23.375 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 679386 root 20 0 128.2g 77056 34048 S 0.0 0.1 0:10.14 reactor_1' 00:41:23.375 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 679386 root 20 0 128.2g 77056 34048 S 0.0 0.1 0:10.14 reactor_1 00:41:23.375 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:23.375 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:23.375 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:23.375 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:23.375 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:23.375 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:23.375 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:23.375 00:23:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:23.375 00:23:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:23.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:23.633 rmmod nvme_tcp 00:41:23.633 rmmod nvme_fabrics 00:41:23.633 rmmod nvme_keyring 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:41:23.633 00:23:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 679358 ']' 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 679358 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 679358 ']' 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 679358 00:41:23.633 00:23:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:41:23.633 00:23:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:23.633 00:23:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 679358 00:41:23.633 00:23:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:23.634 00:23:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:23.634 00:23:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 679358' 00:41:23.634 killing process with pid 679358 00:41:23.634 00:23:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 679358 00:41:23.634 00:23:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 679358 00:41:23.892 00:23:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:23.892 00:23:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:23.892 00:23:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:23.892 00:23:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:41:23.892 00:23:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:41:23.892 00:23:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:23.892 00:23:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:41:23.892 00:23:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:23.892 00:23:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:23.892 00:23:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:23.892 00:23:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:23.892 00:23:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:26.499 00:23:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:26.499 00:41:26.499 real 0m25.082s 00:41:26.499 user 0m39.550s 00:41:26.499 sys 0m10.371s 00:41:26.499 00:23:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:26.499 00:23:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:26.499 ************************************ 00:41:26.499 END TEST nvmf_interrupt 00:41:26.499 ************************************ 00:41:26.499 00:41:26.499 real 30m2.013s 00:41:26.499 user 59m5.662s 00:41:26.499 sys 11m35.186s 00:41:26.499 00:23:10 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:26.499 00:23:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:26.499 ************************************ 00:41:26.499 END TEST nvmf_tcp 00:41:26.499 ************************************ 00:41:26.499 00:23:10 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:26.499 00:23:10 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:26.499 00:23:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:26.499 00:23:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:26.499 00:23:10 -- common/autotest_common.sh@10 -- # set +x 00:41:26.499 ************************************ 00:41:26.499 START TEST spdkcli_nvmf_tcp 00:41:26.500 ************************************ 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:26.500 * Looking for test storage... 00:41:26.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:26.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.500 --rc genhtml_branch_coverage=1 00:41:26.500 --rc genhtml_function_coverage=1 00:41:26.500 --rc genhtml_legend=1 00:41:26.500 --rc geninfo_all_blocks=1 00:41:26.500 --rc geninfo_unexecuted_blocks=1 00:41:26.500 00:41:26.500 ' 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:26.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.500 --rc genhtml_branch_coverage=1 00:41:26.500 --rc genhtml_function_coverage=1 00:41:26.500 --rc genhtml_legend=1 00:41:26.500 --rc geninfo_all_blocks=1 00:41:26.500 --rc geninfo_unexecuted_blocks=1 00:41:26.500 00:41:26.500 ' 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:26.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.500 --rc genhtml_branch_coverage=1 00:41:26.500 --rc genhtml_function_coverage=1 00:41:26.500 --rc genhtml_legend=1 00:41:26.500 --rc geninfo_all_blocks=1 00:41:26.500 --rc geninfo_unexecuted_blocks=1 00:41:26.500 00:41:26.500 ' 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:26.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.500 --rc genhtml_branch_coverage=1 00:41:26.500 --rc genhtml_function_coverage=1 00:41:26.500 --rc genhtml_legend=1 00:41:26.500 --rc geninfo_all_blocks=1 00:41:26.500 --rc geninfo_unexecuted_blocks=1 00:41:26.500 00:41:26.500 ' 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:26.500 
00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:26.500 00:23:10 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:26.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=682452 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 682452 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 682452 ']' 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:26.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:26.500 00:23:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:26.501 [2024-12-10 00:23:10.767780] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
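For context, the cleanup the interrupt test performed just before this spdkcli run started follows this rough shape (a sketch only; the netns removal line is an assumed equivalent of the remove_spdk_ns helper traced above, and $nvmfpid corresponds to 679358):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1           # drop the host connection
  kill "$nvmfpid" && wait "$nvmfpid"                      # stop the interrupt-mode nvmf_tgt
  modprobe -v -r nvme-tcp                                 # unload the kernel initiator modules
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip SPDK_NVMF rules, keep everything else
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true     # tear down the target's namespace (assumed equivalent)
  ip -4 addr flush cvl_0_1                                # flush the leftover test interface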
00:41:26.501 [2024-12-10 00:23:10.767836] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid682452 ] 00:41:26.501 [2024-12-10 00:23:10.855985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:26.501 [2024-12-10 00:23:10.897364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:26.501 [2024-12-10 00:23:10.897367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:27.152 00:23:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:27.152 00:23:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:27.152 00:23:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:27.152 00:23:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:27.152 00:23:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:27.411 00:23:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:27.411 00:23:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:27.411 00:23:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:27.411 00:23:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:27.411 00:23:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:27.411 00:23:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:27.411 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:27.411 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:27.411 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:27.411 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:27.411 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:27.411 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:27.411 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:27.411 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:27.411 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:27.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:27.411 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:27.411 ' 00:41:29.943 [2024-12-10 00:23:14.384448] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:31.319 [2024-12-10 00:23:15.720841] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:41:33.855 [2024-12-10 00:23:18.204484] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:41:36.390 [2024-12-10 00:23:20.367392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:37.769 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:37.769 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:37.769 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:37.769 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:37.769 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:37.769 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:37.769 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:37.769 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:37.769 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:37.769 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:37.769 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:37.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:37.769 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:37.769 00:23:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:37.769 00:23:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:37.769 00:23:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:37.769 00:23:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:37.769 00:23:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:37.769 00:23:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:37.769 00:23:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:37.769 00:23:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:38.337 00:23:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:38.337 00:23:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:38.337 00:23:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:38.337 00:23:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:38.337 00:23:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:38.337 
00:23:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:38.337 00:23:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:38.337 00:23:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:38.337 00:23:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:38.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:41:38.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:38.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:38.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:38.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:38.337 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:38.337 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:38.337 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:38.337 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:38.337 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:38.337 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:38.337 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:38.337 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:38.337 ' 00:41:44.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:44.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:44.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:44.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:44.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:41:44.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:41:44.906 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:44.906 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:44.906 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:44.906 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:44.906 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:44.906 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:44.907 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:44.907 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.907 
00:23:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 682452 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 682452 ']' 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 682452 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 682452 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 682452' 00:41:44.907 killing process with pid 682452 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 682452 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 682452 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 682452 ']' 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 682452 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 682452 ']' 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 682452 00:41:44.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (682452) - No such process 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 682452 is not found' 00:41:44.907 Process with pid 682452 is not found 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:41:44.907 00:41:44.907 real 0m18.082s 00:41:44.907 user 0m39.663s 00:41:44.907 sys 0m1.066s 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:44.907 00:23:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.907 ************************************ 00:41:44.907 END TEST spdkcli_nvmf_tcp 00:41:44.907 ************************************ 00:41:44.907 00:23:28 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:44.907 00:23:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:44.907 00:23:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:44.907 00:23:28 -- common/autotest_common.sh@10 -- # set +x 00:41:44.907 ************************************ 00:41:44.907 START TEST nvmf_identify_passthru 00:41:44.907 ************************************ 00:41:44.907 00:23:28 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:44.907 * Looking for test storage... 
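killprocess above is the autotest_common.sh helper that stops the spdk_tgt instance spdkcli was talking to; the second invocation during cleanup exercises the "No such process" branch. A simplified reconstruction of the logic the trace shows (the real helper also special-cases sudo-wrapped apps and non-Linux hosts):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                 # '[' -z 682452 ']' in the trace
      if ! kill -0 "$pid" 2>/dev/null; then     # kill -0 only probes for existence
          echo "Process with pid $pid is not found"
          return 0
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true           # reap it if it is our child
  }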
00:41:44.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:44.907 00:23:28 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:44.907 00:23:28 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:41:44.907 00:23:28 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:44.907 00:23:28 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:41:44.907 00:23:28 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:44.907 00:23:28 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:44.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.907 --rc genhtml_branch_coverage=1 00:41:44.907 --rc genhtml_function_coverage=1 00:41:44.907 --rc genhtml_legend=1 00:41:44.907 --rc geninfo_all_blocks=1 00:41:44.907 --rc geninfo_unexecuted_blocks=1 00:41:44.907 00:41:44.907 ' 00:41:44.907 00:23:28 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:44.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.907 --rc genhtml_branch_coverage=1 00:41:44.907 --rc genhtml_function_coverage=1 00:41:44.907 --rc genhtml_legend=1 00:41:44.907 --rc geninfo_all_blocks=1 00:41:44.907 --rc geninfo_unexecuted_blocks=1 00:41:44.907 00:41:44.907 ' 00:41:44.907 00:23:28 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:44.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.907 --rc genhtml_branch_coverage=1 00:41:44.907 --rc genhtml_function_coverage=1 00:41:44.907 --rc genhtml_legend=1 00:41:44.907 --rc geninfo_all_blocks=1 00:41:44.907 --rc geninfo_unexecuted_blocks=1 00:41:44.907 00:41:44.907 ' 00:41:44.907 00:23:28 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:44.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.907 --rc genhtml_branch_coverage=1 00:41:44.907 --rc genhtml_function_coverage=1 00:41:44.907 --rc genhtml_legend=1 00:41:44.907 --rc geninfo_all_blocks=1 00:41:44.907 --rc geninfo_unexecuted_blocks=1 00:41:44.907 00:41:44.907 ' 00:41:44.907 00:23:28 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
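The long lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov is older than 2.x, which selects the 1.x-style --rc option spelling used in LCOV_OPTS. The comparison itself is a plain field-by-field numeric walk over the dotted version strings; a hedged sketch of that idea (dots only, whereas the real helper also splits on '-' and ':'):

  version_lt() {                        # returns 0 (true) when $1 < $2
      local IFS=.
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1                          # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov 1.x option spelling selected"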
NVMF_THIRD_PORT=4422 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:44.907 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:44.907 00:23:28 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:44.907 00:23:28 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.907 00:23:28 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.908 00:23:28 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.908 00:23:28 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:44.908 00:23:28 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:44.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:44.908 00:23:28 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:44.908 00:23:28 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:44.908 00:23:28 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:44.908 00:23:28 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:44.908 00:23:28 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:44.908 00:23:28 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.908 00:23:28 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.908 00:23:28 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.908 00:23:28 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:44.908 00:23:28 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.908 00:23:28 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:44.908 00:23:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:44.908 00:23:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:44.908 00:23:28 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:41:44.908 00:23:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:41:51.482 00:23:35 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:51.482 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:51.482 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:51.482 Found net devices under 0000:af:00.0: cvl_0_0 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:51.482 Found net devices under 0000:af:00.1: cvl_0_1 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:51.482 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:51.483 00:23:35 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:51.483 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:51.742 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:51.742 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:51.742 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:51.742 00:23:35 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:51.742 00:23:36 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:51.742 00:23:36 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:51.742 00:23:36 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:51.742 00:23:36 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:51.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:51.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:41:51.742 00:41:51.742 --- 10.0.0.2 ping statistics --- 00:41:51.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:51.742 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:41:51.742 00:23:36 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:51.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
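The nvmf_tcp_init trace above sets up the physical test network: the first E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side, the second port (cvl_0_1) stays in the host namespace as the initiator, and a single /24 joins them. A condensed sketch of the commands the trace executes:

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

  ip -4 addr flush $TGT_IF; ip -4 addr flush $INI_IF
  ip netns add $NS
  ip link set $TGT_IF netns $NS
  ip addr add 10.0.0.1/24 dev $INI_IF                       # initiator side
  ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF     # target side
  ip link set $INI_IF up
  ip netns exec $NS ip link set $TGT_IF up
  ip netns exec $NS ip link set lo up

  # Let NVMe/TCP traffic in, tagged so nvmftestfini can strip the rule later.
  iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  ping -c 1 10.0.0.2                      # host -> namespace
  ip netns exec $NS ping -c 1 10.0.0.1    # namespace -> host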
00:41:51.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:41:51.742 00:41:51.742 --- 10.0.0.1 ping statistics --- 00:41:51.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:51.742 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:41:51.742 00:23:36 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:51.742 00:23:36 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:41:51.742 00:23:36 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:51.742 00:23:36 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:51.742 00:23:36 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:51.742 00:23:36 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:51.742 00:23:36 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:51.742 00:23:36 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:51.742 00:23:36 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:51.742 00:23:36 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:41:51.742 00:23:36 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:51.742 00:23:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:51.742 00:23:36 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:41:51.742 00:23:36 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:41:51.742 00:23:36 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:41:51.742 00:23:36 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:41:51.742 00:23:36 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:41:51.742 00:23:36 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:41:51.742 00:23:36 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:41:51.742 00:23:36 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:51.742 00:23:36 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:51.742 00:23:36 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:41:52.001 00:23:36 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:41:52.001 00:23:36 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:41:52.001 00:23:36 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:d8:00.0 00:41:52.001 00:23:36 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:41:52.001 00:23:36 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:41:52.001 00:23:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:41:52.001 00:23:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:41:52.001 00:23:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:41:57.277 00:23:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 
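Before any target is started, identify_passthru.sh asks gen_nvme.sh for the locally attached NVMe controllers and keeps the first PCI address, then records that controller's serial and model directly over PCIe as the baseline for the later comparison. A sketch of that step (paths and the 0000:d8:00.0 address are the ones traced here; nvme_serial and nvme_model are illustrative variable names):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # First controller reported by gen_nvme.sh becomes the device under test.
  bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)

  # Identify it over PCIe and keep the serial/model; the passthru subsystem is
  # expected to report the very same values over NVMe/TCP later on.
  nvme_serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
      | grep 'Serial Number:' | awk '{print $3}')
  nvme_model=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
      | grep 'Model Number:' | awk '{print $3}')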
-- # nvme_serial_number=BTLN916500W71P6AGN 00:41:57.277 00:23:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:57.277 00:23:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:41:57.277 00:23:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:42:01.472 00:23:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:42:01.472 00:23:45 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:42:01.472 00:23:45 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:01.472 00:23:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:01.472 00:23:45 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:42:01.472 00:23:45 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:01.472 00:23:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:01.472 00:23:45 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=690360 00:42:01.472 00:23:45 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:42:01.472 00:23:45 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:01.472 00:23:45 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 690360 00:42:01.472 00:23:45 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 690360 ']' 00:42:01.472 00:23:45 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:01.472 00:23:45 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:01.472 00:23:45 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:01.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:01.473 00:23:45 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:01.473 00:23:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:01.473 [2024-12-10 00:23:45.883364] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:42:01.473 [2024-12-10 00:23:45.883416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:01.732 [2024-12-10 00:23:45.978848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:01.732 [2024-12-10 00:23:46.020047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:01.732 [2024-12-10 00:23:46.020087] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
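With a bdf and baseline identity in hand, the test launches nvmf_tgt inside the target namespace with --wait-for-rpc so the identify-passthru option can be switched on before the subsystem layer initializes; the JSON-RPC exchange for that option is traced below. A condensed sketch of the bring-up, issuing through scripts/rpc.py the same calls that rpc_cmd wraps in the trace:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$rootdir/scripts/rpc.py"

  ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # waitforlisten polls until the app answers on /var/tmp/spdk.sock

  $rpc nvmf_set_config --passthru-identify-ctrlr      # admin_cmd_passthru.identify_ctrlr = true
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o -u 8192

  # Attach the local controller and export it 1:1 through a single-namespace subsystem.
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420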
00:42:01.732 [2024-12-10 00:23:46.020097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:01.732 [2024-12-10 00:23:46.020106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:01.732 [2024-12-10 00:23:46.020113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:01.732 [2024-12-10 00:23:46.021783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:01.732 [2024-12-10 00:23:46.021894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:01.732 [2024-12-10 00:23:46.021935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:01.732 [2024-12-10 00:23:46.021936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:02.301 00:23:46 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:02.301 00:23:46 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:42:02.301 00:23:46 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:42:02.301 00:23:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.301 00:23:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:02.301 INFO: Log level set to 20 00:42:02.301 INFO: Requests: 00:42:02.301 { 00:42:02.301 "jsonrpc": "2.0", 00:42:02.301 "method": "nvmf_set_config", 00:42:02.301 "id": 1, 00:42:02.301 "params": { 00:42:02.301 "admin_cmd_passthru": { 00:42:02.301 "identify_ctrlr": true 00:42:02.301 } 00:42:02.301 } 00:42:02.301 } 00:42:02.301 00:42:02.301 INFO: response: 00:42:02.301 { 00:42:02.301 "jsonrpc": "2.0", 00:42:02.301 "id": 1, 00:42:02.301 "result": true 00:42:02.301 } 00:42:02.301 00:42:02.301 00:23:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.301 00:23:46 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:42:02.301 00:23:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.301 00:23:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:02.301 INFO: Setting log level to 20 00:42:02.301 INFO: Setting log level to 20 00:42:02.301 INFO: Log level set to 20 00:42:02.301 INFO: Log level set to 20 00:42:02.301 INFO: Requests: 00:42:02.301 { 00:42:02.301 "jsonrpc": "2.0", 00:42:02.301 "method": "framework_start_init", 00:42:02.301 "id": 1 00:42:02.301 } 00:42:02.301 00:42:02.301 INFO: Requests: 00:42:02.301 { 00:42:02.301 "jsonrpc": "2.0", 00:42:02.301 "method": "framework_start_init", 00:42:02.301 "id": 1 00:42:02.301 } 00:42:02.301 00:42:02.561 [2024-12-10 00:23:46.812990] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:42:02.561 INFO: response: 00:42:02.561 { 00:42:02.561 "jsonrpc": "2.0", 00:42:02.561 "id": 1, 00:42:02.561 "result": true 00:42:02.561 } 00:42:02.561 00:42:02.561 INFO: response: 00:42:02.561 { 00:42:02.561 "jsonrpc": "2.0", 00:42:02.561 "id": 1, 00:42:02.561 "result": true 00:42:02.561 } 00:42:02.561 00:42:02.561 00:23:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.561 00:23:46 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:02.561 00:23:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.561 00:23:46 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:42:02.561 INFO: Setting log level to 40 00:42:02.561 INFO: Setting log level to 40 00:42:02.561 INFO: Setting log level to 40 00:42:02.561 [2024-12-10 00:23:46.826320] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:02.561 00:23:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.561 00:23:46 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:42:02.561 00:23:46 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:02.561 00:23:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:02.561 00:23:46 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:42:02.561 00:23:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.561 00:23:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:05.852 Nvme0n1 00:42:05.852 00:23:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.852 00:23:49 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:42:05.852 00:23:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.852 00:23:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:05.852 00:23:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.852 00:23:49 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:42:05.852 00:23:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.852 00:23:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:05.852 00:23:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.852 00:23:49 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:05.852 00:23:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.852 00:23:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:05.852 [2024-12-10 00:23:49.775699] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:05.852 00:23:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.852 00:23:49 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:42:05.852 00:23:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.852 00:23:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:05.852 [ 00:42:05.852 { 00:42:05.852 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:42:05.852 "subtype": "Discovery", 00:42:05.852 "listen_addresses": [], 00:42:05.852 "allow_any_host": true, 00:42:05.852 "hosts": [] 00:42:05.852 }, 00:42:05.852 { 00:42:05.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:05.852 "subtype": "NVMe", 00:42:05.852 "listen_addresses": [ 00:42:05.852 { 00:42:05.852 "trtype": "TCP", 00:42:05.853 "adrfam": "IPv4", 00:42:05.853 "traddr": "10.0.0.2", 00:42:05.853 "trsvcid": "4420" 00:42:05.853 } 00:42:05.853 ], 00:42:05.853 "allow_any_host": true, 00:42:05.853 "hosts": [], 00:42:05.853 "serial_number": 
"SPDK00000000000001", 00:42:05.853 "model_number": "SPDK bdev Controller", 00:42:05.853 "max_namespaces": 1, 00:42:05.853 "min_cntlid": 1, 00:42:05.853 "max_cntlid": 65519, 00:42:05.853 "namespaces": [ 00:42:05.853 { 00:42:05.853 "nsid": 1, 00:42:05.853 "bdev_name": "Nvme0n1", 00:42:05.853 "name": "Nvme0n1", 00:42:05.853 "nguid": "32834E381B2A41C1AEB56037CDFB2232", 00:42:05.853 "uuid": "32834e38-1b2a-41c1-aeb5-6037cdfb2232" 00:42:05.853 } 00:42:05.853 ] 00:42:05.853 } 00:42:05.853 ] 00:42:05.853 00:23:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.853 00:23:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:05.853 00:23:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:42:05.853 00:23:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:42:05.853 00:23:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:42:05.853 00:23:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:05.853 00:23:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:42:05.853 00:23:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:42:06.112 00:23:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:42:06.112 00:23:50 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:42:06.112 00:23:50 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:42:06.112 00:23:50 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:06.112 00:23:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.112 00:23:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:06.112 00:23:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.112 00:23:50 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:42:06.112 00:23:50 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:42:06.112 00:23:50 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:06.112 00:23:50 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:42:06.112 00:23:50 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:06.112 00:23:50 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:42:06.112 00:23:50 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:06.112 00:23:50 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:06.112 rmmod nvme_tcp 00:42:06.112 rmmod nvme_fabrics 00:42:06.112 rmmod nvme_keyring 00:42:06.112 00:23:50 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:06.112 00:23:50 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:42:06.112 00:23:50 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:42:06.112 00:23:50 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 690360 ']' 00:42:06.112 00:23:50 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 690360 00:42:06.112 00:23:50 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 690360 ']' 00:42:06.112 00:23:50 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 690360 00:42:06.112 00:23:50 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:42:06.112 00:23:50 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:06.112 00:23:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 690360 00:42:06.112 00:23:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:06.112 00:23:50 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:06.112 00:23:50 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 690360' 00:42:06.112 killing process with pid 690360 00:42:06.112 00:23:50 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 690360 00:42:06.112 00:23:50 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 690360 00:42:08.649 00:23:52 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:08.649 00:23:52 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:08.649 00:23:52 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:08.649 00:23:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:42:08.649 00:23:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:42:08.649 00:23:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:08.649 00:23:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:42:08.649 00:23:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:08.649 00:23:52 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:08.649 00:23:52 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:08.649 00:23:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:08.649 00:23:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:10.555 00:23:54 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:10.555 00:42:10.555 real 0m25.981s 00:42:10.555 user 0m34.080s 00:42:10.555 sys 0m7.720s 00:42:10.555 00:23:54 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:10.555 00:23:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:10.555 ************************************ 00:42:10.555 END TEST nvmf_identify_passthru 00:42:10.555 ************************************ 00:42:10.555 00:23:54 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:10.555 00:23:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:10.555 00:23:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:10.555 00:23:54 -- common/autotest_common.sh@10 -- # set +x 00:42:10.555 ************************************ 00:42:10.555 START TEST nvmf_dif 00:42:10.555 ************************************ 00:42:10.555 00:23:54 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:10.555 * Looking for test storage... 
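nvmftestfini then unwinds everything the init path set up so that the next test in the stream (nvmf_dif, which starts right here) gets a clean machine: kernel initiator modules unloaded, target process killed, the tagged iptables rule stripped and the namespace removed. A hedged sketch of that sequence; the ip netns delete line is an assumption about what _remove_spdk_ns amounts to on this box:

  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, as logged above
  modprobe -v -r nvme-fabrics

  killprocess "$nvmfpid"

  iptables-save | grep -v SPDK_NVMF | iptables-restore    # removes only the SPDK-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                         # assumed equivalent of _remove_spdk_ns here
  ip -4 addr flush cvl_0_1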
00:42:10.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:10.555 00:23:54 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:10.555 00:23:54 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:42:10.555 00:23:54 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:10.555 00:23:54 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:42:10.555 00:23:54 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:10.556 00:23:54 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:10.556 00:23:54 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:42:10.556 00:23:54 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:10.556 00:23:54 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:10.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.556 --rc genhtml_branch_coverage=1 00:42:10.556 --rc genhtml_function_coverage=1 00:42:10.556 --rc genhtml_legend=1 00:42:10.556 --rc geninfo_all_blocks=1 00:42:10.556 --rc geninfo_unexecuted_blocks=1 00:42:10.556 00:42:10.556 ' 00:42:10.556 00:23:54 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:10.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.556 --rc genhtml_branch_coverage=1 00:42:10.556 --rc genhtml_function_coverage=1 00:42:10.556 --rc genhtml_legend=1 00:42:10.556 --rc geninfo_all_blocks=1 00:42:10.556 --rc geninfo_unexecuted_blocks=1 00:42:10.556 00:42:10.556 ' 00:42:10.556 00:23:54 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:42:10.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.556 --rc genhtml_branch_coverage=1 00:42:10.556 --rc genhtml_function_coverage=1 00:42:10.556 --rc genhtml_legend=1 00:42:10.556 --rc geninfo_all_blocks=1 00:42:10.556 --rc geninfo_unexecuted_blocks=1 00:42:10.556 00:42:10.556 ' 00:42:10.556 00:23:54 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:10.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:10.556 --rc genhtml_branch_coverage=1 00:42:10.556 --rc genhtml_function_coverage=1 00:42:10.556 --rc genhtml_legend=1 00:42:10.556 --rc geninfo_all_blocks=1 00:42:10.556 --rc geninfo_unexecuted_blocks=1 00:42:10.556 00:42:10.556 ' 00:42:10.556 00:23:54 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:10.556 00:23:54 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:42:10.556 00:23:54 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:10.556 00:23:54 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:10.556 00:23:54 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:10.556 00:23:54 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.556 00:23:54 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.556 00:23:54 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.556 00:23:54 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:42:10.556 00:23:54 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:10.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:10.556 00:23:54 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:42:10.556 00:23:54 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:42:10.556 00:23:54 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:42:10.556 00:23:54 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:42:10.556 00:23:54 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:10.556 00:23:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:10.556 00:23:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:10.556 00:23:54 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:42:10.556 00:23:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:18.692 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:18.692 
00:24:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:18.692 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:18.692 Found net devices under 0000:af:00.0: cvl_0_0 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:18.692 Found net devices under 0000:af:00.1: cvl_0_1 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:18.692 00:24:01 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:18.692 00:24:02 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:18.692 00:24:02 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:18.692 00:24:02 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:18.692 00:24:02 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:18.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:18.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:42:18.692 00:42:18.692 --- 10.0.0.2 ping statistics --- 00:42:18.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:18.692 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:42:18.692 00:24:02 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:18.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:18.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:42:18.692 00:42:18.692 --- 10.0.0.1 ping statistics --- 00:42:18.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:18.692 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:42:18.692 00:24:02 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:18.692 00:24:02 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:42:18.692 00:24:02 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:18.692 00:24:02 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:21.230 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:42:21.230 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:21.230 00:24:05 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:21.230 00:24:05 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:21.230 00:24:05 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:21.230 00:24:05 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:21.230 00:24:05 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:21.230 00:24:05 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:21.230 00:24:05 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:21.230 00:24:05 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:21.230 00:24:05 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:21.231 00:24:05 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:21.231 00:24:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:21.231 00:24:05 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=696907 00:42:21.231 00:24:05 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:21.231 00:24:05 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 696907 00:42:21.231 00:24:05 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 696907 ']' 00:42:21.231 00:24:05 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:21.231 00:24:05 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:21.231 00:24:05 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:42:21.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:21.231 00:24:05 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:21.231 00:24:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:21.231 [2024-12-10 00:24:05.687257] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:42:21.231 [2024-12-10 00:24:05.687310] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:21.490 [2024-12-10 00:24:05.782836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:21.490 [2024-12-10 00:24:05.822579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:21.490 [2024-12-10 00:24:05.822618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:21.490 [2024-12-10 00:24:05.822628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:21.490 [2024-12-10 00:24:05.822636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:21.490 [2024-12-10 00:24:05.822643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:21.490 [2024-12-10 00:24:05.823238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:22.059 00:24:06 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:22.059 00:24:06 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:42:22.059 00:24:06 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:22.059 00:24:06 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:22.059 00:24:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:22.318 00:24:06 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:22.318 00:24:06 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:22.318 00:24:06 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:22.318 00:24:06 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:22.318 00:24:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:22.318 [2024-12-10 00:24:06.568450] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:22.318 00:24:06 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:22.318 00:24:06 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:22.318 00:24:06 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:22.318 00:24:06 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:22.318 00:24:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:22.318 ************************************ 00:42:22.318 START TEST fio_dif_1_default 00:42:22.318 ************************************ 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:22.318 bdev_null0 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:22.318 [2024-12-10 00:24:06.644773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:42:22.318 00:24:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:22.319 { 00:42:22.319 "params": { 00:42:22.319 "name": "Nvme$subsystem", 00:42:22.319 "trtype": "$TEST_TRANSPORT", 00:42:22.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:22.319 "adrfam": "ipv4", 00:42:22.319 "trsvcid": "$NVMF_PORT", 00:42:22.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:22.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:22.319 "hdgst": ${hdgst:-false}, 00:42:22.319 "ddgst": ${ddgst:-false} 00:42:22.319 }, 00:42:22.319 "method": "bdev_nvme_attach_controller" 00:42:22.319 } 00:42:22.319 EOF 00:42:22.319 )") 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
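For anyone reproducing this step outside the test harness: the trace above creates the DIF target with a handful of RPCs (TCP transport with --dif-insert-or-strip, a null bdev with 512-byte blocks plus 16-byte metadata and DIF type 1, subsystem cnode0 listening on 10.0.0.2:4420) and then drives it with fio's spdk_bdev engine, passing the generated bdev config over /dev/fd/62. The sketch below collects those steps into a standalone script. It is a minimal illustration under stated assumptions, not the test itself: rpc.py stands in for SPDK's scripts/rpc.py (rpc_cmd in the trace is a wrapper around it), the "subsystems"/"bdev" JSON envelope is the usual SPDK config layout rather than something printed verbatim above, and the paths and the job file name are placeholders.

#!/usr/bin/env bash
# Target side: values mirror the rpc_cmd calls in the trace above.
rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: a bdev config equivalent to the JSON printed above,
# wrapped in the standard subsystems/bdev envelope (assumed layout).
cat > /tmp/nvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# fio_plugin in the trace boils down to preloading the SPDK fio plugin;
# /path/to/spdk and /tmp/dif.fio are placeholders for the build tree and job file.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme_bdev.json /tmp/dif.fio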
00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:22.319 "params": { 00:42:22.319 "name": "Nvme0", 00:42:22.319 "trtype": "tcp", 00:42:22.319 "traddr": "10.0.0.2", 00:42:22.319 "adrfam": "ipv4", 00:42:22.319 "trsvcid": "4420", 00:42:22.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:22.319 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:22.319 "hdgst": false, 00:42:22.319 "ddgst": false 00:42:22.319 }, 00:42:22.319 "method": "bdev_nvme_attach_controller" 00:42:22.319 }' 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:22.319 00:24:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:22.577 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:22.577 fio-3.35 00:42:22.577 Starting 1 thread 00:42:34.790 00:42:34.790 filename0: (groupid=0, jobs=1): err= 0: pid=697414: Tue Dec 10 00:24:17 2024 00:42:34.790 read: IOPS=200, BW=801KiB/s (820kB/s)(8032KiB/10027msec) 00:42:34.790 slat (nsec): min=5682, max=31525, avg=5936.37, stdev=797.46 00:42:34.790 clat (usec): min=377, max=42563, avg=19956.66, stdev=20401.45 00:42:34.790 lat (usec): min=382, max=42569, avg=19962.59, stdev=20401.41 00:42:34.790 clat percentiles (usec): 00:42:34.790 | 1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 404], 20.00th=[ 412], 00:42:34.790 | 30.00th=[ 420], 40.00th=[ 461], 50.00th=[ 562], 60.00th=[40633], 00:42:34.790 | 70.00th=[40633], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:42:34.790 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:34.790 | 99.99th=[42730] 00:42:34.790 bw ( KiB/s): min= 736, max= 896, per=100.00%, avg=801.60, stdev=45.82, samples=20 00:42:34.790 iops : min= 184, max= 224, avg=200.40, stdev=11.45, samples=20 00:42:34.790 lat (usec) : 500=44.52%, 750=7.67% 00:42:34.790 lat (msec) : 50=47.81% 00:42:34.790 cpu : usr=86.48%, sys=13.30%, ctx=8, majf=0, minf=0 00:42:34.790 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:34.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:34.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:34.790 issued rwts: total=2008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:34.790 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:34.790 
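The job parameters behind the numbers above come from the fio banner line, not from a job file captured in this log; the exact output of gen_fio_conf is not shown here. As an illustration only, a job section consistent with that banner (randread, 4 KiB blocks, iodepth 4, one thread, roughly a 10 s run against the attached bdev) would look something like the following, where the bdev name Nvme0n1 and the thread/direct settings are assumptions based on common spdk_bdev usage rather than values printed above.

[global]
ioengine=spdk_bdev
thread=1
direct=1
rw=randread
bs=4096
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1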
00:42:34.790 Run status group 0 (all jobs): 00:42:34.790 READ: bw=801KiB/s (820kB/s), 801KiB/s-801KiB/s (820kB/s-820kB/s), io=8032KiB (8225kB), run=10027-10027msec 00:42:34.790 00:24:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:34.790 00:24:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:34.790 00:24:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:34.790 00:24:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:34.790 00:24:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:34.790 00:24:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:34.790 00:24:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.790 00:24:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:34.790 00:24:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.790 00:24:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:34.790 00:24:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.790 00:24:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:34.790 00:24:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.790 00:42:34.790 real 0m11.450s 00:42:34.790 user 0m17.772s 00:42:34.790 sys 0m1.762s 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:34.791 ************************************ 00:42:34.791 END TEST fio_dif_1_default 00:42:34.791 ************************************ 00:42:34.791 00:24:18 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:34.791 00:24:18 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:34.791 00:24:18 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:34.791 00:24:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:34.791 ************************************ 00:42:34.791 START TEST fio_dif_1_multi_subsystems 00:42:34.791 ************************************ 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.791 bdev_null0 00:42:34.791 00:24:18 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.791 [2024-12-10 00:24:18.187842] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.791 bdev_null1 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:34.791 { 00:42:34.791 "params": { 00:42:34.791 "name": "Nvme$subsystem", 00:42:34.791 "trtype": "$TEST_TRANSPORT", 00:42:34.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:34.791 "adrfam": "ipv4", 00:42:34.791 "trsvcid": "$NVMF_PORT", 00:42:34.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:34.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:34.791 "hdgst": ${hdgst:-false}, 00:42:34.791 "ddgst": ${ddgst:-false} 00:42:34.791 }, 00:42:34.791 "method": "bdev_nvme_attach_controller" 00:42:34.791 } 00:42:34.791 EOF 00:42:34.791 )") 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:34.791 { 00:42:34.791 "params": { 00:42:34.791 "name": "Nvme$subsystem", 00:42:34.791 "trtype": "$TEST_TRANSPORT", 00:42:34.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:34.791 "adrfam": "ipv4", 00:42:34.791 "trsvcid": "$NVMF_PORT", 00:42:34.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:34.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:34.791 "hdgst": ${hdgst:-false}, 00:42:34.791 "ddgst": ${ddgst:-false} 00:42:34.791 }, 00:42:34.791 "method": "bdev_nvme_attach_controller" 00:42:34.791 } 00:42:34.791 EOF 00:42:34.791 )") 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:34.791 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:34.791 "params": { 00:42:34.791 "name": "Nvme0", 00:42:34.791 "trtype": "tcp", 00:42:34.791 "traddr": "10.0.0.2", 00:42:34.791 "adrfam": "ipv4", 00:42:34.791 "trsvcid": "4420", 00:42:34.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:34.791 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:34.791 "hdgst": false, 00:42:34.791 "ddgst": false 00:42:34.791 }, 00:42:34.791 "method": "bdev_nvme_attach_controller" 00:42:34.791 },{ 00:42:34.791 "params": { 00:42:34.791 "name": "Nvme1", 00:42:34.791 "trtype": "tcp", 00:42:34.791 "traddr": "10.0.0.2", 00:42:34.791 "adrfam": "ipv4", 00:42:34.791 "trsvcid": "4420", 00:42:34.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:34.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:34.792 "hdgst": false, 00:42:34.792 "ddgst": false 00:42:34.792 }, 00:42:34.792 "method": "bdev_nvme_attach_controller" 00:42:34.792 }' 00:42:34.792 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:34.792 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:34.792 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:34.792 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:34.792 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:34.792 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:34.792 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:42:34.792 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:34.792 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:34.792 00:24:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:34.792 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:34.792 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:34.792 fio-3.35 00:42:34.792 Starting 2 threads 00:42:47.003 00:42:47.003 filename0: (groupid=0, jobs=1): err= 0: pid=699409: Tue Dec 10 00:24:29 2024 00:42:47.003 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10006msec) 00:42:47.003 slat (nsec): min=5773, max=28668, avg=7463.97, stdev=2589.44 00:42:47.003 clat (usec): min=378, max=41999, avg=40817.33, stdev=2591.37 00:42:47.003 lat (usec): min=384, max=42010, avg=40824.80, stdev=2591.39 00:42:47.003 clat percentiles (usec): 00:42:47.003 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:42:47.003 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:47.003 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:47.003 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:42:47.003 | 99.99th=[42206] 00:42:47.003 bw ( KiB/s): min= 384, max= 416, per=33.46%, avg=390.40, stdev=13.13, samples=20 00:42:47.003 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:42:47.003 lat (usec) : 500=0.41% 00:42:47.003 lat (msec) : 50=99.59% 00:42:47.003 cpu : usr=93.41%, sys=6.35%, ctx=9, majf=0, minf=104 00:42:47.003 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:47.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:47.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:47.003 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:47.003 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:47.003 filename1: (groupid=0, jobs=1): err= 0: pid=699410: Tue Dec 10 00:24:29 2024 00:42:47.003 read: IOPS=193, BW=774KiB/s (792kB/s)(7744KiB/10007msec) 00:42:47.003 slat (nsec): min=5782, max=29372, avg=6801.79, stdev=1937.24 00:42:47.003 clat (usec): min=392, max=42558, avg=20655.79, stdev=20461.83 00:42:47.003 lat (usec): min=399, max=42565, avg=20662.59, stdev=20461.24 00:42:47.003 clat percentiles (usec): 00:42:47.003 | 1.00th=[ 404], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 424], 00:42:47.003 | 30.00th=[ 433], 40.00th=[ 469], 50.00th=[ 635], 60.00th=[40633], 00:42:47.003 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:42:47.003 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:47.003 | 99.99th=[42730] 00:42:47.003 bw ( KiB/s): min= 672, max= 832, per=66.23%, avg=772.80, stdev=39.23, samples=20 00:42:47.003 iops : min= 168, max= 208, avg=193.20, stdev= 9.81, samples=20 00:42:47.003 lat (usec) : 500=43.29%, 750=7.33% 00:42:47.003 lat (msec) : 50=49.38% 00:42:47.003 cpu : usr=93.79%, sys=5.97%, ctx=12, majf=0, minf=73 00:42:47.003 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:47.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:47.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:47.003 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:47.003 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:47.003 00:42:47.003 Run status group 0 (all jobs): 00:42:47.003 READ: bw=1166KiB/s (1194kB/s), 392KiB/s-774KiB/s (401kB/s-792kB/s), io=11.4MiB (11.9MB), run=10006-10007msec 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.003 00:42:47.003 real 0m11.443s 00:42:47.003 user 0m28.389s 00:42:47.003 sys 0m1.632s 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:47.003 00:24:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:47.003 ************************************ 00:42:47.003 END TEST fio_dif_1_multi_subsystems 00:42:47.003 ************************************ 00:42:47.003 00:24:29 nvmf_dif -- 
target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:47.003 00:24:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:47.003 00:24:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:47.003 00:24:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:47.003 ************************************ 00:42:47.003 START TEST fio_dif_rand_params 00:42:47.003 ************************************ 00:42:47.003 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:42:47.003 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:47.003 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:47.003 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:47.003 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:47.003 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:42:47.003 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:47.003 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:47.003 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:47.003 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:47.003 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:47.004 bdev_null0 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:47.004 [2024-12-10 00:24:29.709675] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:47.004 { 00:42:47.004 "params": { 00:42:47.004 "name": "Nvme$subsystem", 00:42:47.004 "trtype": "$TEST_TRANSPORT", 00:42:47.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:47.004 "adrfam": "ipv4", 00:42:47.004 "trsvcid": "$NVMF_PORT", 00:42:47.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:47.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:47.004 "hdgst": ${hdgst:-false}, 00:42:47.004 "ddgst": ${ddgst:-false} 00:42:47.004 }, 00:42:47.004 "method": "bdev_nvme_attach_controller" 00:42:47.004 } 00:42:47.004 EOF 00:42:47.004 )") 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:47.004 "params": { 00:42:47.004 "name": "Nvme0", 00:42:47.004 "trtype": "tcp", 00:42:47.004 "traddr": "10.0.0.2", 00:42:47.004 "adrfam": "ipv4", 00:42:47.004 "trsvcid": "4420", 00:42:47.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:47.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:47.004 "hdgst": false, 00:42:47.004 "ddgst": false 00:42:47.004 }, 00:42:47.004 "method": "bdev_nvme_attach_controller" 00:42:47.004 }' 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:47.004 00:24:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:47.004 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:47.004 ... 
00:42:47.004 fio-3.35 00:42:47.004 Starting 3 threads 00:42:52.382 00:42:52.382 filename0: (groupid=0, jobs=1): err= 0: pid=701401: Tue Dec 10 00:24:35 2024 00:42:52.382 read: IOPS=319, BW=40.0MiB/s (41.9MB/s)(202MiB/5044msec) 00:42:52.382 slat (nsec): min=6033, max=45722, avg=10612.34, stdev=2293.78 00:42:52.382 clat (usec): min=3467, max=51116, avg=9342.99, stdev=5870.01 00:42:52.382 lat (usec): min=3474, max=51128, avg=9353.60, stdev=5870.16 00:42:52.382 clat percentiles (usec): 00:42:52.382 | 1.00th=[ 3687], 5.00th=[ 6063], 10.00th=[ 6915], 20.00th=[ 7635], 00:42:52.382 | 30.00th=[ 8029], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8979], 00:42:52.382 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10159], 95.00th=[10683], 00:42:52.382 | 99.00th=[48497], 99.50th=[49021], 99.90th=[50070], 99.95th=[51119], 00:42:52.382 | 99.99th=[51119] 00:42:52.382 bw ( KiB/s): min=18432, max=51200, per=34.29%, avg=41241.60, stdev=8601.71, samples=10 00:42:52.382 iops : min= 144, max= 400, avg=322.20, stdev=67.20, samples=10 00:42:52.382 lat (msec) : 4=1.74%, 10=85.18%, 20=10.91%, 50=1.92%, 100=0.25% 00:42:52.382 cpu : usr=91.08%, sys=8.65%, ctx=11, majf=0, minf=9 00:42:52.382 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:52.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:52.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:52.382 issued rwts: total=1613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:52.382 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:52.382 filename0: (groupid=0, jobs=1): err= 0: pid=701402: Tue Dec 10 00:24:35 2024 00:42:52.382 read: IOPS=323, BW=40.4MiB/s (42.4MB/s)(202MiB/5003msec) 00:42:52.382 slat (nsec): min=6017, max=27418, avg=10407.88, stdev=2207.10 00:42:52.382 clat (usec): min=3376, max=50721, avg=9262.33, stdev=4633.88 00:42:52.382 lat (usec): min=3385, max=50732, avg=9272.74, stdev=4634.38 00:42:52.382 clat percentiles (usec): 00:42:52.382 | 1.00th=[ 3589], 5.00th=[ 3687], 10.00th=[ 5604], 20.00th=[ 7635], 00:42:52.382 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:42:52.382 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11076], 95.00th=[11600], 00:42:52.382 | 99.00th=[44303], 99.50th=[49021], 99.90th=[50070], 99.95th=[50594], 00:42:52.382 | 99.99th=[50594] 00:42:52.382 bw ( KiB/s): min=27392, max=65024, per=34.76%, avg=41813.33, stdev=9855.17, samples=9 00:42:52.382 iops : min= 214, max= 508, avg=326.67, stdev=76.99, samples=9 00:42:52.382 lat (msec) : 4=7.05%, 10=61.31%, 20=30.53%, 50=0.93%, 100=0.19% 00:42:52.382 cpu : usr=91.92%, sys=7.78%, ctx=9, majf=0, minf=9 00:42:52.382 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:52.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:52.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:52.382 issued rwts: total=1618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:52.382 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:52.382 filename0: (groupid=0, jobs=1): err= 0: pid=701403: Tue Dec 10 00:24:35 2024 00:42:52.382 read: IOPS=299, BW=37.4MiB/s (39.2MB/s)(189MiB/5043msec) 00:42:52.382 slat (nsec): min=6037, max=26364, avg=10924.76, stdev=2017.58 00:42:52.382 clat (usec): min=5171, max=49986, avg=9984.95, stdev=5593.78 00:42:52.382 lat (usec): min=5179, max=49997, avg=9995.88, stdev=5593.81 00:42:52.382 clat percentiles (usec): 00:42:52.382 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 7373], 
20.00th=[ 8160], 00:42:52.382 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:42:52.382 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10945], 95.00th=[11469], 00:42:52.382 | 99.00th=[46400], 99.50th=[47973], 99.90th=[50070], 99.95th=[50070], 00:42:52.382 | 99.99th=[50070] 00:42:52.382 bw ( KiB/s): min=20992, max=43520, per=32.07%, avg=38579.20, stdev=6532.38, samples=10 00:42:52.382 iops : min= 164, max= 340, avg=301.40, stdev=51.03, samples=10 00:42:52.382 lat (msec) : 10=68.92%, 20=28.96%, 50=2.12% 00:42:52.382 cpu : usr=91.51%, sys=8.11%, ctx=6, majf=0, minf=9 00:42:52.382 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:52.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:52.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:52.382 issued rwts: total=1509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:52.382 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:52.382 00:42:52.382 Run status group 0 (all jobs): 00:42:52.382 READ: bw=117MiB/s (123MB/s), 37.4MiB/s-40.4MiB/s (39.2MB/s-42.4MB/s), io=593MiB (621MB), run=5003-5044msec 00:42:52.382 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:42:52.382 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:52.382 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 bdev_null0 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 [2024-12-10 00:24:36.174117] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 bdev_null1 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 bdev_null2 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:52.383 { 00:42:52.383 "params": { 00:42:52.383 "name": "Nvme$subsystem", 00:42:52.383 "trtype": "$TEST_TRANSPORT", 00:42:52.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:52.383 "adrfam": "ipv4", 
00:42:52.383 "trsvcid": "$NVMF_PORT", 00:42:52.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:52.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:52.383 "hdgst": ${hdgst:-false}, 00:42:52.383 "ddgst": ${ddgst:-false} 00:42:52.383 }, 00:42:52.383 "method": "bdev_nvme_attach_controller" 00:42:52.383 } 00:42:52.383 EOF 00:42:52.383 )") 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:52.383 { 00:42:52.383 "params": { 00:42:52.383 "name": "Nvme$subsystem", 00:42:52.383 "trtype": "$TEST_TRANSPORT", 00:42:52.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:52.383 "adrfam": "ipv4", 00:42:52.383 "trsvcid": "$NVMF_PORT", 00:42:52.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:52.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:52.383 "hdgst": ${hdgst:-false}, 00:42:52.383 "ddgst": ${ddgst:-false} 00:42:52.383 }, 00:42:52.383 "method": "bdev_nvme_attach_controller" 00:42:52.383 } 00:42:52.383 EOF 00:42:52.383 )") 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:52.383 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@72 -- # (( file <= files )) 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:52.384 { 00:42:52.384 "params": { 00:42:52.384 "name": "Nvme$subsystem", 00:42:52.384 "trtype": "$TEST_TRANSPORT", 00:42:52.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:52.384 "adrfam": "ipv4", 00:42:52.384 "trsvcid": "$NVMF_PORT", 00:42:52.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:52.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:52.384 "hdgst": ${hdgst:-false}, 00:42:52.384 "ddgst": ${ddgst:-false} 00:42:52.384 }, 00:42:52.384 "method": "bdev_nvme_attach_controller" 00:42:52.384 } 00:42:52.384 EOF 00:42:52.384 )") 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:52.384 "params": { 00:42:52.384 "name": "Nvme0", 00:42:52.384 "trtype": "tcp", 00:42:52.384 "traddr": "10.0.0.2", 00:42:52.384 "adrfam": "ipv4", 00:42:52.384 "trsvcid": "4420", 00:42:52.384 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:52.384 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:52.384 "hdgst": false, 00:42:52.384 "ddgst": false 00:42:52.384 }, 00:42:52.384 "method": "bdev_nvme_attach_controller" 00:42:52.384 },{ 00:42:52.384 "params": { 00:42:52.384 "name": "Nvme1", 00:42:52.384 "trtype": "tcp", 00:42:52.384 "traddr": "10.0.0.2", 00:42:52.384 "adrfam": "ipv4", 00:42:52.384 "trsvcid": "4420", 00:42:52.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:52.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:52.384 "hdgst": false, 00:42:52.384 "ddgst": false 00:42:52.384 }, 00:42:52.384 "method": "bdev_nvme_attach_controller" 00:42:52.384 },{ 00:42:52.384 "params": { 00:42:52.384 "name": "Nvme2", 00:42:52.384 "trtype": "tcp", 00:42:52.384 "traddr": "10.0.0.2", 00:42:52.384 "adrfam": "ipv4", 00:42:52.384 "trsvcid": "4420", 00:42:52.384 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:52.384 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:52.384 "hdgst": false, 00:42:52.384 "ddgst": false 00:42:52.384 }, 00:42:52.384 "method": "bdev_nvme_attach_controller" 00:42:52.384 }' 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:52.384 00:24:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:52.384 00:24:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:52.384 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:52.384 ... 00:42:52.384 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:52.384 ... 00:42:52.384 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:52.384 ... 00:42:52.384 fio-3.35 00:42:52.384 Starting 24 threads 00:43:04.580 00:43:04.580 filename0: (groupid=0, jobs=1): err= 0: pid=702599: Tue Dec 10 00:24:47 2024 00:43:04.580 read: IOPS=90, BW=361KiB/s (369kB/s)(3648KiB/10118msec) 00:43:04.580 slat (nsec): min=6349, max=43532, avg=9230.62, stdev=4768.90 00:43:04.580 clat (usec): min=1630, max=233748, avg=177415.63, stdev=59736.00 00:43:04.580 lat (usec): min=1651, max=233761, avg=177424.86, stdev=59732.88 00:43:04.580 clat percentiles (usec): 00:43:04.580 | 1.00th=[ 1827], 5.00th=[ 3326], 10.00th=[ 85459], 20.00th=[173016], 00:43:04.580 | 30.00th=[177210], 40.00th=[179307], 50.00th=[183501], 60.00th=[198181], 00:43:04.580 | 70.00th=[202376], 80.00th=[225444], 90.00th=[229639], 95.00th=[231736], 00:43:04.580 | 99.00th=[233833], 99.50th=[233833], 99.90th=[233833], 99.95th=[233833], 00:43:04.580 | 99.99th=[233833] 00:43:04.580 bw ( KiB/s): min= 255, max= 1024, per=5.17%, avg=358.30, stdev=169.22, samples=20 00:43:04.580 iops : min= 63, max= 256, avg=89.50, stdev=42.33, samples=20 00:43:04.580 lat (msec) : 2=1.75%, 4=3.51%, 10=1.75%, 50=1.75%, 100=1.75% 00:43:04.580 lat (msec) : 250=89.47% 00:43:04.580 cpu : usr=97.75%, sys=1.91%, ctx=62, majf=0, minf=9 00:43:04.580 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:04.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.580 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.580 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.580 filename0: (groupid=0, jobs=1): err= 0: pid=702600: Tue Dec 10 00:24:47 2024 00:43:04.580 read: IOPS=74, BW=296KiB/s (303kB/s)(2984KiB/10079msec) 00:43:04.580 slat (nsec): min=4344, max=19499, avg=7877.34, stdev=2101.34 00:43:04.580 clat (msec): min=162, max=344, avg=215.71, stdev=44.47 00:43:04.580 lat (msec): min=162, max=344, avg=215.72, stdev=44.47 00:43:04.580 clat percentiles (msec): 00:43:04.580 | 1.00th=[ 167], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:43:04.580 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 211], 60.00th=[ 218], 00:43:04.580 | 70.00th=[ 224], 80.00th=[ 232], 90.00th=[ 275], 95.00th=[ 326], 00:43:04.580 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:43:04.580 | 99.99th=[ 347] 00:43:04.580 bw ( KiB/s): min= 128, max= 384, per=4.20%, avg=291.90, stdev=64.44, samples=20 00:43:04.580 iops : min= 32, max= 96, avg=72.90, stdev=16.15, samples=20 00:43:04.580 lat (msec) : 250=82.04%, 500=17.96% 00:43:04.580 cpu : usr=97.77%, sys=1.89%, ctx=12, majf=0, minf=9 00:43:04.580 IO depths : 1=0.5%, 2=1.7%, 
4=8.7%, 8=76.3%, 16=12.7%, 32=0.0%, >=64=0.0% 00:43:04.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.580 complete : 0=0.0%, 4=89.2%, 8=6.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.580 issued rwts: total=746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.580 filename0: (groupid=0, jobs=1): err= 0: pid=702601: Tue Dec 10 00:24:47 2024 00:43:04.580 read: IOPS=72, BW=290KiB/s (297kB/s)(2920KiB/10068msec) 00:43:04.580 slat (nsec): min=6321, max=25116, avg=7880.69, stdev=2115.64 00:43:04.580 clat (msec): min=157, max=440, avg=220.22, stdev=54.63 00:43:04.580 lat (msec): min=157, max=440, avg=220.23, stdev=54.63 00:43:04.580 clat percentiles (msec): 00:43:04.580 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 182], 00:43:04.580 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 213], 60.00th=[ 222], 00:43:04.580 | 70.00th=[ 224], 80.00th=[ 232], 90.00th=[ 309], 95.00th=[ 338], 00:43:04.580 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 439], 99.95th=[ 439], 00:43:04.580 | 99.99th=[ 439] 00:43:04.580 bw ( KiB/s): min= 128, max= 384, per=4.12%, avg=285.50, stdev=62.27, samples=20 00:43:04.580 iops : min= 32, max= 96, avg=71.30, stdev=15.51, samples=20 00:43:04.580 lat (msec) : 250=81.64%, 500=18.36% 00:43:04.580 cpu : usr=97.76%, sys=1.90%, ctx=16, majf=0, minf=9 00:43:04.580 IO depths : 1=0.3%, 2=1.2%, 4=7.9%, 8=77.5%, 16=13.0%, 32=0.0%, >=64=0.0% 00:43:04.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.580 complete : 0=0.0%, 4=89.0%, 8=6.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.580 issued rwts: total=730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.580 filename0: (groupid=0, jobs=1): err= 0: pid=702602: Tue Dec 10 00:24:47 2024 00:43:04.580 read: IOPS=74, BW=297KiB/s (304kB/s)(2992KiB/10088msec) 00:43:04.580 slat (nsec): min=6237, max=25417, avg=7937.08, stdev=2388.85 00:43:04.580 clat (msec): min=153, max=343, avg=215.32, stdev=45.92 00:43:04.580 lat (msec): min=153, max=343, avg=215.33, stdev=45.92 00:43:04.580 clat percentiles (msec): 00:43:04.580 | 1.00th=[ 155], 5.00th=[ 157], 10.00th=[ 178], 20.00th=[ 180], 00:43:04.580 | 30.00th=[ 184], 40.00th=[ 201], 50.00th=[ 205], 60.00th=[ 211], 00:43:04.580 | 70.00th=[ 220], 80.00th=[ 245], 90.00th=[ 275], 95.00th=[ 330], 00:43:04.580 | 99.00th=[ 338], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 342], 00:43:04.580 | 99.99th=[ 342] 00:43:04.580 bw ( KiB/s): min= 176, max= 384, per=4.22%, avg=292.70, stdev=61.05, samples=20 00:43:04.580 iops : min= 44, max= 96, avg=73.10, stdev=15.30, samples=20 00:43:04.580 lat (msec) : 250=83.96%, 500=16.04% 00:43:04.580 cpu : usr=97.53%, sys=2.13%, ctx=13, majf=0, minf=9 00:43:04.580 IO depths : 1=0.1%, 2=0.4%, 4=5.7%, 8=80.5%, 16=13.2%, 32=0.0%, >=64=0.0% 00:43:04.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.580 complete : 0=0.0%, 4=88.3%, 8=7.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.580 issued rwts: total=748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.580 filename0: (groupid=0, jobs=1): err= 0: pid=702603: Tue Dec 10 00:24:47 2024 00:43:04.580 read: IOPS=80, BW=321KiB/s (328kB/s)(3240KiB/10101msec) 00:43:04.580 slat (nsec): min=6345, max=32703, avg=8036.49, stdev=2583.04 00:43:04.580 clat (msec): min=110, max=337, avg=198.43, stdev=34.98 00:43:04.580 lat (msec): 
min=110, max=337, avg=198.44, stdev=34.98 00:43:04.580 clat percentiles (msec): 00:43:04.580 | 1.00th=[ 111], 5.00th=[ 142], 10.00th=[ 176], 20.00th=[ 178], 00:43:04.580 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 211], 00:43:04.580 | 70.00th=[ 215], 80.00th=[ 222], 90.00th=[ 230], 95.00th=[ 253], 00:43:04.580 | 99.00th=[ 321], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:43:04.580 | 99.99th=[ 338] 00:43:04.580 bw ( KiB/s): min= 224, max= 384, per=4.58%, avg=317.60, stdev=50.93, samples=20 00:43:04.580 iops : min= 56, max= 96, avg=79.40, stdev=12.73, samples=20 00:43:04.580 lat (msec) : 250=94.07%, 500=5.93% 00:43:04.580 cpu : usr=97.74%, sys=1.91%, ctx=12, majf=0, minf=9 00:43:04.580 IO depths : 1=0.4%, 2=1.1%, 4=8.1%, 8=78.0%, 16=12.3%, 32=0.0%, >=64=0.0% 00:43:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 complete : 0=0.0%, 4=89.2%, 8=5.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 issued rwts: total=810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.581 filename0: (groupid=0, jobs=1): err= 0: pid=702604: Tue Dec 10 00:24:47 2024 00:43:04.581 read: IOPS=77, BW=310KiB/s (317kB/s)(3120KiB/10068msec) 00:43:04.581 slat (nsec): min=6306, max=24927, avg=7921.21, stdev=2453.63 00:43:04.581 clat (msec): min=160, max=439, avg=205.97, stdev=40.71 00:43:04.581 lat (msec): min=160, max=439, avg=205.98, stdev=40.71 00:43:04.581 clat percentiles (msec): 00:43:04.581 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 178], 00:43:04.581 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 211], 60.00th=[ 213], 00:43:04.581 | 70.00th=[ 218], 80.00th=[ 224], 90.00th=[ 230], 95.00th=[ 232], 00:43:04.581 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 439], 99.95th=[ 439], 00:43:04.581 | 99.99th=[ 439] 00:43:04.581 bw ( KiB/s): min= 128, max= 384, per=4.43%, avg=307.90, stdev=60.88, samples=20 00:43:04.581 iops : min= 32, max= 96, avg=76.90, stdev=15.17, samples=20 00:43:04.581 lat (msec) : 250=95.64%, 500=4.36% 00:43:04.581 cpu : usr=97.74%, sys=1.93%, ctx=11, majf=0, minf=9 00:43:04.581 IO depths : 1=0.5%, 2=1.2%, 4=8.1%, 8=78.2%, 16=12.1%, 32=0.0%, >=64=0.0% 00:43:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 complete : 0=0.0%, 4=89.3%, 8=5.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 issued rwts: total=780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.581 filename0: (groupid=0, jobs=1): err= 0: pid=702605: Tue Dec 10 00:24:47 2024 00:43:04.581 read: IOPS=54, BW=216KiB/s (221kB/s)(2176KiB/10071msec) 00:43:04.581 slat (nsec): min=6541, max=33261, avg=15497.95, stdev=4641.35 00:43:04.581 clat (msec): min=152, max=439, avg=296.06, stdev=58.57 00:43:04.581 lat (msec): min=152, max=439, avg=296.07, stdev=58.57 00:43:04.581 clat percentiles (msec): 00:43:04.581 | 1.00th=[ 153], 5.00th=[ 182], 10.00th=[ 232], 20.00th=[ 262], 00:43:04.581 | 30.00th=[ 271], 40.00th=[ 275], 50.00th=[ 292], 60.00th=[ 317], 00:43:04.581 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 426], 00:43:04.581 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 439], 99.95th=[ 439], 00:43:04.581 | 99.99th=[ 439] 00:43:04.581 bw ( KiB/s): min= 128, max= 256, per=3.05%, avg=211.10, stdev=57.97, samples=20 00:43:04.581 iops : min= 32, max= 64, avg=52.70, stdev=14.51, samples=20 00:43:04.581 lat (msec) : 250=11.03%, 500=88.97% 00:43:04.581 cpu : usr=97.93%, sys=1.72%, ctx=24, 
majf=0, minf=9 00:43:04.581 IO depths : 1=3.5%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:43:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.581 filename0: (groupid=0, jobs=1): err= 0: pid=702606: Tue Dec 10 00:24:47 2024 00:43:04.581 read: IOPS=82, BW=329KiB/s (337kB/s)(3328KiB/10101msec) 00:43:04.581 slat (nsec): min=6351, max=25323, avg=8058.73, stdev=2545.14 00:43:04.581 clat (msec): min=110, max=233, avg=194.17, stdev=28.12 00:43:04.581 lat (msec): min=110, max=233, avg=194.17, stdev=28.12 00:43:04.581 clat percentiles (msec): 00:43:04.581 | 1.00th=[ 111], 5.00th=[ 150], 10.00th=[ 174], 20.00th=[ 176], 00:43:04.581 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 201], 00:43:04.581 | 70.00th=[ 220], 80.00th=[ 226], 90.00th=[ 230], 95.00th=[ 232], 00:43:04.581 | 99.00th=[ 234], 99.50th=[ 234], 99.90th=[ 234], 99.95th=[ 234], 00:43:04.581 | 99.99th=[ 234] 00:43:04.581 bw ( KiB/s): min= 256, max= 384, per=4.71%, avg=326.40, stdev=65.33, samples=20 00:43:04.581 iops : min= 64, max= 96, avg=81.60, stdev=16.33, samples=20 00:43:04.581 lat (msec) : 250=100.00% 00:43:04.581 cpu : usr=97.87%, sys=1.78%, ctx=11, majf=0, minf=9 00:43:04.581 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.581 filename1: (groupid=0, jobs=1): err= 0: pid=702607: Tue Dec 10 00:24:47 2024 00:43:04.581 read: IOPS=79, BW=316KiB/s (324kB/s)(3192KiB/10101msec) 00:43:04.581 slat (nsec): min=6336, max=32521, avg=8031.07, stdev=2515.33 00:43:04.581 clat (msec): min=110, max=338, avg=201.39, stdev=40.03 00:43:04.581 lat (msec): min=110, max=338, avg=201.39, stdev=40.03 00:43:04.581 clat percentiles (msec): 00:43:04.581 | 1.00th=[ 111], 5.00th=[ 132], 10.00th=[ 144], 20.00th=[ 178], 00:43:04.581 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 211], 60.00th=[ 215], 00:43:04.581 | 70.00th=[ 218], 80.00th=[ 228], 90.00th=[ 239], 95.00th=[ 264], 00:43:04.581 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:43:04.581 | 99.99th=[ 338] 00:43:04.581 bw ( KiB/s): min= 224, max= 384, per=4.51%, avg=312.80, stdev=52.03, samples=20 00:43:04.581 iops : min= 56, max= 96, avg=78.20, stdev=13.01, samples=20 00:43:04.581 lat (msec) : 250=90.23%, 500=9.77% 00:43:04.581 cpu : usr=97.82%, sys=1.83%, ctx=10, majf=0, minf=9 00:43:04.581 IO depths : 1=0.5%, 2=2.1%, 4=10.5%, 8=74.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:43:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 complete : 0=0.0%, 4=89.9%, 8=5.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 issued rwts: total=798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.581 filename1: (groupid=0, jobs=1): err= 0: pid=702608: Tue Dec 10 00:24:47 2024 00:43:04.581 read: IOPS=74, BW=297KiB/s (304kB/s)(2992KiB/10072msec) 00:43:04.581 slat (nsec): min=4848, max=23508, avg=8163.73, stdev=2976.27 00:43:04.581 clat (msec): min=71, max=419, 
avg=215.36, stdev=53.05 00:43:04.581 lat (msec): min=71, max=419, avg=215.37, stdev=53.05 00:43:04.581 clat percentiles (msec): 00:43:04.581 | 1.00th=[ 150], 5.00th=[ 155], 10.00th=[ 174], 20.00th=[ 178], 00:43:04.581 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 197], 60.00th=[ 228], 00:43:04.581 | 70.00th=[ 232], 80.00th=[ 247], 90.00th=[ 317], 95.00th=[ 334], 00:43:04.581 | 99.00th=[ 355], 99.50th=[ 355], 99.90th=[ 422], 99.95th=[ 422], 00:43:04.581 | 99.99th=[ 422] 00:43:04.581 bw ( KiB/s): min= 128, max= 384, per=4.22%, avg=292.70, stdev=61.43, samples=20 00:43:04.581 iops : min= 32, max= 96, avg=73.10, stdev=15.34, samples=20 00:43:04.581 lat (msec) : 100=0.80%, 250=83.69%, 500=15.51% 00:43:04.581 cpu : usr=97.51%, sys=2.14%, ctx=14, majf=0, minf=9 00:43:04.581 IO depths : 1=0.1%, 2=0.4%, 4=5.7%, 8=80.5%, 16=13.2%, 32=0.0%, >=64=0.0% 00:43:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 complete : 0=0.0%, 4=88.3%, 8=7.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 issued rwts: total=748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.581 filename1: (groupid=0, jobs=1): err= 0: pid=702609: Tue Dec 10 00:24:47 2024 00:43:04.581 read: IOPS=82, BW=329KiB/s (337kB/s)(3328KiB/10102msec) 00:43:04.581 slat (nsec): min=6335, max=22595, avg=7853.56, stdev=2087.12 00:43:04.581 clat (msec): min=19, max=334, avg=193.49, stdev=49.93 00:43:04.581 lat (msec): min=19, max=334, avg=193.50, stdev=49.93 00:43:04.581 clat percentiles (msec): 00:43:04.581 | 1.00th=[ 20], 5.00th=[ 79], 10.00th=[ 142], 20.00th=[ 178], 00:43:04.581 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 211], 00:43:04.581 | 70.00th=[ 215], 80.00th=[ 222], 90.00th=[ 230], 95.00th=[ 271], 00:43:04.581 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 334], 99.95th=[ 334], 00:43:04.581 | 99.99th=[ 334] 00:43:04.581 bw ( KiB/s): min= 224, max= 638, per=4.74%, avg=328.70, stdev=87.95, samples=20 00:43:04.581 iops : min= 56, max= 159, avg=82.15, stdev=21.90, samples=20 00:43:04.581 lat (msec) : 20=1.68%, 50=0.24%, 100=3.85%, 250=86.78%, 500=7.45% 00:43:04.581 cpu : usr=97.50%, sys=2.17%, ctx=13, majf=0, minf=9 00:43:04.581 IO depths : 1=0.6%, 2=2.2%, 4=10.6%, 8=74.5%, 16=12.1%, 32=0.0%, >=64=0.0% 00:43:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 complete : 0=0.0%, 4=89.9%, 8=4.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.581 filename1: (groupid=0, jobs=1): err= 0: pid=702610: Tue Dec 10 00:24:47 2024 00:43:04.581 read: IOPS=78, BW=314KiB/s (322kB/s)(3168KiB/10086msec) 00:43:04.581 slat (nsec): min=6298, max=33192, avg=8800.93, stdev=3887.83 00:43:04.581 clat (msec): min=160, max=330, avg=202.80, stdev=26.59 00:43:04.581 lat (msec): min=160, max=330, avg=202.81, stdev=26.59 00:43:04.581 clat percentiles (msec): 00:43:04.581 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 176], 20.00th=[ 178], 00:43:04.581 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 209], 60.00th=[ 213], 00:43:04.581 | 70.00th=[ 215], 80.00th=[ 226], 90.00th=[ 232], 95.00th=[ 239], 00:43:04.581 | 99.00th=[ 275], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330], 00:43:04.581 | 99.99th=[ 330] 00:43:04.581 bw ( KiB/s): min= 256, max= 384, per=4.48%, avg=310.35, stdev=47.66, samples=20 00:43:04.581 iops : min= 64, max= 96, avg=77.55, stdev=11.90, samples=20 00:43:04.581 
lat (msec) : 250=95.20%, 500=4.80% 00:43:04.581 cpu : usr=97.78%, sys=1.86%, ctx=24, majf=0, minf=9 00:43:04.581 IO depths : 1=0.5%, 2=1.1%, 4=8.0%, 8=78.3%, 16=12.1%, 32=0.0%, >=64=0.0% 00:43:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 complete : 0=0.0%, 4=89.2%, 8=5.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 issued rwts: total=792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.581 filename1: (groupid=0, jobs=1): err= 0: pid=702611: Tue Dec 10 00:24:47 2024 00:43:04.581 read: IOPS=77, BW=312KiB/s (319kB/s)(3144KiB/10077msec) 00:43:04.581 slat (nsec): min=5443, max=20426, avg=7901.97, stdev=2189.40 00:43:04.581 clat (msec): min=159, max=284, avg=204.63, stdev=27.82 00:43:04.581 lat (msec): min=159, max=284, avg=204.64, stdev=27.82 00:43:04.581 clat percentiles (msec): 00:43:04.581 | 1.00th=[ 161], 5.00th=[ 171], 10.00th=[ 176], 20.00th=[ 178], 00:43:04.581 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 211], 60.00th=[ 213], 00:43:04.581 | 70.00th=[ 218], 80.00th=[ 228], 90.00th=[ 232], 95.00th=[ 268], 00:43:04.581 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 284], 00:43:04.581 | 99.99th=[ 284] 00:43:04.581 bw ( KiB/s): min= 128, max= 384, per=4.43%, avg=307.90, stdev=62.93, samples=20 00:43:04.581 iops : min= 32, max= 96, avg=76.90, stdev=15.75, samples=20 00:43:04.581 lat (msec) : 250=93.13%, 500=6.87% 00:43:04.581 cpu : usr=97.71%, sys=1.96%, ctx=13, majf=0, minf=9 00:43:04.581 IO depths : 1=0.6%, 2=1.7%, 4=9.2%, 8=76.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:43:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 complete : 0=0.0%, 4=89.6%, 8=5.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 issued rwts: total=786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.581 filename1: (groupid=0, jobs=1): err= 0: pid=702612: Tue Dec 10 00:24:47 2024 00:43:04.581 read: IOPS=79, BW=318KiB/s (326kB/s)(3216KiB/10101msec) 00:43:04.581 slat (nsec): min=6313, max=28448, avg=8181.13, stdev=3017.64 00:43:04.581 clat (msec): min=111, max=335, avg=199.94, stdev=31.66 00:43:04.581 lat (msec): min=111, max=335, avg=199.94, stdev=31.66 00:43:04.581 clat percentiles (msec): 00:43:04.581 | 1.00th=[ 112], 5.00th=[ 171], 10.00th=[ 176], 20.00th=[ 178], 00:43:04.581 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 213], 00:43:04.581 | 70.00th=[ 215], 80.00th=[ 220], 90.00th=[ 230], 95.00th=[ 232], 00:43:04.581 | 99.00th=[ 317], 99.50th=[ 334], 99.90th=[ 334], 99.95th=[ 334], 00:43:04.581 | 99.99th=[ 334] 00:43:04.581 bw ( KiB/s): min= 256, max= 384, per=4.55%, avg=315.20, stdev=48.17, samples=20 00:43:04.581 iops : min= 64, max= 96, avg=78.80, stdev=12.04, samples=20 00:43:04.581 lat (msec) : 250=95.27%, 500=4.73% 00:43:04.581 cpu : usr=97.50%, sys=2.16%, ctx=13, majf=0, minf=9 00:43:04.581 IO depths : 1=0.2%, 2=1.2%, 4=9.0%, 8=77.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:43:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 complete : 0=0.0%, 4=89.5%, 8=5.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 issued rwts: total=804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.581 filename1: (groupid=0, jobs=1): err= 0: pid=702613: Tue Dec 10 00:24:47 2024 00:43:04.581 read: IOPS=53, BW=216KiB/s (221kB/s)(2176KiB/10088msec) 00:43:04.581 slat (nsec): 
min=6564, max=49654, avg=21089.88, stdev=6219.04 00:43:04.581 clat (msec): min=152, max=435, avg=296.50, stdev=52.51 00:43:04.581 lat (msec): min=152, max=435, avg=296.52, stdev=52.51 00:43:04.581 clat percentiles (msec): 00:43:04.581 | 1.00th=[ 153], 5.00th=[ 182], 10.00th=[ 251], 20.00th=[ 262], 00:43:04.581 | 30.00th=[ 271], 40.00th=[ 275], 50.00th=[ 292], 60.00th=[ 317], 00:43:04.581 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 372], 00:43:04.581 | 99.00th=[ 430], 99.50th=[ 430], 99.90th=[ 435], 99.95th=[ 435], 00:43:04.581 | 99.99th=[ 435] 00:43:04.581 bw ( KiB/s): min= 127, max= 256, per=3.05%, avg=211.10, stdev=59.59, samples=20 00:43:04.581 iops : min= 31, max= 64, avg=52.70, stdev=14.92, samples=20 00:43:04.581 lat (msec) : 250=8.82%, 500=91.18% 00:43:04.581 cpu : usr=97.62%, sys=2.03%, ctx=13, majf=0, minf=9 00:43:04.581 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:43:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.581 filename1: (groupid=0, jobs=1): err= 0: pid=702614: Tue Dec 10 00:24:47 2024 00:43:04.581 read: IOPS=53, BW=216KiB/s (221kB/s)(2176KiB/10081msec) 00:43:04.581 slat (nsec): min=5896, max=33262, avg=11656.96, stdev=5129.51 00:43:04.581 clat (msec): min=152, max=449, avg=296.39, stdev=56.93 00:43:04.581 lat (msec): min=152, max=449, avg=296.40, stdev=56.93 00:43:04.581 clat percentiles (msec): 00:43:04.581 | 1.00th=[ 153], 5.00th=[ 178], 10.00th=[ 232], 20.00th=[ 264], 00:43:04.581 | 30.00th=[ 271], 40.00th=[ 275], 50.00th=[ 288], 60.00th=[ 313], 00:43:04.581 | 70.00th=[ 321], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 422], 00:43:04.581 | 99.00th=[ 451], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:43:04.581 | 99.99th=[ 451] 00:43:04.581 bw ( KiB/s): min= 128, max= 256, per=3.05%, avg=211.10, stdev=57.97, samples=20 00:43:04.581 iops : min= 32, max= 64, avg=52.70, stdev=14.51, samples=20 00:43:04.581 lat (msec) : 250=11.03%, 500=88.97% 00:43:04.581 cpu : usr=97.72%, sys=1.94%, ctx=10, majf=0, minf=9 00:43:04.581 IO depths : 1=3.5%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:43:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.581 filename2: (groupid=0, jobs=1): err= 0: pid=702615: Tue Dec 10 00:24:47 2024 00:43:04.581 read: IOPS=54, BW=216KiB/s (221kB/s)(2176KiB/10071msec) 00:43:04.581 slat (nsec): min=6758, max=33577, avg=13936.52, stdev=4392.94 00:43:04.581 clat (msec): min=152, max=439, avg=296.07, stdev=52.98 00:43:04.581 lat (msec): min=152, max=439, avg=296.08, stdev=52.98 00:43:04.581 clat percentiles (msec): 00:43:04.581 | 1.00th=[ 153], 5.00th=[ 178], 10.00th=[ 251], 20.00th=[ 262], 00:43:04.581 | 30.00th=[ 271], 40.00th=[ 275], 50.00th=[ 313], 60.00th=[ 317], 00:43:04.581 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 342], 95.00th=[ 351], 00:43:04.581 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 439], 99.95th=[ 439], 00:43:04.581 | 99.99th=[ 439] 00:43:04.581 bw ( KiB/s): min= 127, max= 256, per=3.05%, avg=211.10, stdev=62.67, samples=20 00:43:04.581 iops : min= 31, 
max= 64, avg=52.70, stdev=15.69, samples=20 00:43:04.581 lat (msec) : 250=7.35%, 500=92.65% 00:43:04.581 cpu : usr=97.75%, sys=1.92%, ctx=11, majf=0, minf=9 00:43:04.581 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:43:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.581 filename2: (groupid=0, jobs=1): err= 0: pid=702616: Tue Dec 10 00:24:47 2024 00:43:04.581 read: IOPS=74, BW=297KiB/s (304kB/s)(2992KiB/10085msec) 00:43:04.581 slat (nsec): min=4234, max=24213, avg=7915.11, stdev=2369.77 00:43:04.581 clat (msec): min=152, max=344, avg=214.65, stdev=47.78 00:43:04.581 lat (msec): min=152, max=344, avg=214.66, stdev=47.78 00:43:04.581 clat percentiles (msec): 00:43:04.581 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 176], 20.00th=[ 178], 00:43:04.581 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 213], 00:43:04.581 | 70.00th=[ 222], 80.00th=[ 257], 90.00th=[ 275], 95.00th=[ 326], 00:43:04.581 | 99.00th=[ 342], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:43:04.581 | 99.99th=[ 347] 00:43:04.581 bw ( KiB/s): min= 224, max= 384, per=4.22%, avg=292.75, stdev=50.88, samples=20 00:43:04.581 iops : min= 56, max= 96, avg=73.15, stdev=12.71, samples=20 00:43:04.581 lat (msec) : 250=71.66%, 500=28.34% 00:43:04.581 cpu : usr=97.89%, sys=1.77%, ctx=11, majf=0, minf=9 00:43:04.581 IO depths : 1=0.3%, 2=0.7%, 4=6.1%, 8=79.8%, 16=13.1%, 32=0.0%, >=64=0.0% 00:43:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 complete : 0=0.0%, 4=88.4%, 8=7.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.581 issued rwts: total=748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.582 filename2: (groupid=0, jobs=1): err= 0: pid=702617: Tue Dec 10 00:24:47 2024 00:43:04.582 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10069msec) 00:43:04.582 slat (nsec): min=4812, max=32297, avg=8419.05, stdev=3120.46 00:43:04.582 clat (msec): min=149, max=534, avg=305.03, stdev=61.20 00:43:04.582 lat (msec): min=149, max=534, avg=305.04, stdev=61.20 00:43:04.582 clat percentiles (msec): 00:43:04.582 | 1.00th=[ 184], 5.00th=[ 226], 10.00th=[ 255], 20.00th=[ 264], 00:43:04.582 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 300], 60.00th=[ 317], 00:43:04.582 | 70.00th=[ 334], 80.00th=[ 338], 90.00th=[ 347], 95.00th=[ 430], 00:43:04.582 | 99.00th=[ 535], 99.50th=[ 535], 99.90th=[ 535], 99.95th=[ 535], 00:43:04.582 | 99.99th=[ 535] 00:43:04.582 bw ( KiB/s): min= 127, max= 256, per=3.11%, avg=215.47, stdev=57.83, samples=19 00:43:04.582 iops : min= 31, max= 64, avg=53.79, stdev=14.49, samples=19 00:43:04.582 lat (msec) : 250=8.71%, 500=88.26%, 750=3.03% 00:43:04.582 cpu : usr=97.80%, sys=1.85%, ctx=13, majf=0, minf=9 00:43:04.582 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:43:04.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.582 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.582 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.582 filename2: (groupid=0, jobs=1): err= 0: pid=702618: Tue Dec 10 00:24:47 2024 00:43:04.582 read: IOPS=75, 
BW=300KiB/s (307kB/s)(3024KiB/10071msec) 00:43:04.582 slat (nsec): min=5560, max=25699, avg=7880.91, stdev=2038.65 00:43:04.582 clat (msec): min=164, max=440, avg=212.23, stdev=47.40 00:43:04.582 lat (msec): min=164, max=440, avg=212.24, stdev=47.40 00:43:04.582 clat percentiles (msec): 00:43:04.582 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 178], 00:43:04.582 | 30.00th=[ 182], 40.00th=[ 192], 50.00th=[ 211], 60.00th=[ 213], 00:43:04.582 | 70.00th=[ 218], 80.00th=[ 228], 90.00th=[ 259], 95.00th=[ 296], 00:43:04.582 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 439], 99.95th=[ 439], 00:43:04.582 | 99.99th=[ 439] 00:43:04.582 bw ( KiB/s): min= 128, max= 384, per=4.32%, avg=299.90, stdev=58.67, samples=20 00:43:04.582 iops : min= 32, max= 96, avg=74.90, stdev=14.64, samples=20 00:43:04.582 lat (msec) : 250=89.68%, 500=10.32% 00:43:04.582 cpu : usr=97.77%, sys=1.90%, ctx=13, majf=0, minf=9 00:43:04.582 IO depths : 1=0.4%, 2=0.9%, 4=7.1%, 8=79.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:43:04.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.582 complete : 0=0.0%, 4=88.8%, 8=6.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.582 issued rwts: total=756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.582 filename2: (groupid=0, jobs=1): err= 0: pid=702619: Tue Dec 10 00:24:47 2024 00:43:04.582 read: IOPS=77, BW=311KiB/s (318kB/s)(3136KiB/10091msec) 00:43:04.582 slat (nsec): min=6278, max=27613, avg=7876.03, stdev=2185.48 00:43:04.582 clat (msec): min=141, max=335, avg=204.87, stdev=32.65 00:43:04.582 lat (msec): min=141, max=335, avg=204.87, stdev=32.65 00:43:04.582 clat percentiles (msec): 00:43:04.582 | 1.00th=[ 142], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 180], 00:43:04.582 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 207], 60.00th=[ 213], 00:43:04.582 | 70.00th=[ 218], 80.00th=[ 228], 90.00th=[ 232], 95.00th=[ 275], 00:43:04.582 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:43:04.582 | 99.99th=[ 338] 00:43:04.582 bw ( KiB/s): min= 223, max= 384, per=4.43%, avg=307.10, stdev=53.93, samples=20 00:43:04.582 iops : min= 55, max= 96, avg=76.70, stdev=13.55, samples=20 00:43:04.582 lat (msec) : 250=91.33%, 500=8.67% 00:43:04.582 cpu : usr=97.50%, sys=2.15%, ctx=7, majf=0, minf=9 00:43:04.582 IO depths : 1=0.5%, 2=1.4%, 4=8.5%, 8=77.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:43:04.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.582 complete : 0=0.0%, 4=89.3%, 8=5.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.582 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.582 filename2: (groupid=0, jobs=1): err= 0: pid=702620: Tue Dec 10 00:24:47 2024 00:43:04.582 read: IOPS=82, BW=329KiB/s (337kB/s)(3320KiB/10102msec) 00:43:04.582 slat (nsec): min=6325, max=28929, avg=7874.06, stdev=2378.77 00:43:04.582 clat (msec): min=25, max=335, avg=193.74, stdev=40.77 00:43:04.582 lat (msec): min=25, max=335, avg=193.75, stdev=40.77 00:43:04.582 clat percentiles (msec): 00:43:04.582 | 1.00th=[ 26], 5.00th=[ 114], 10.00th=[ 174], 20.00th=[ 178], 00:43:04.582 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 213], 00:43:04.582 | 70.00th=[ 215], 80.00th=[ 220], 90.00th=[ 230], 95.00th=[ 232], 00:43:04.582 | 99.00th=[ 275], 99.50th=[ 321], 99.90th=[ 338], 99.95th=[ 338], 00:43:04.582 | 99.99th=[ 338] 00:43:04.582 bw ( KiB/s): min= 224, max= 512, per=4.69%, 
avg=325.60, stdev=66.74, samples=20 00:43:04.582 iops : min= 56, max= 128, avg=81.40, stdev=16.68, samples=20 00:43:04.582 lat (msec) : 50=1.93%, 100=1.93%, 250=92.77%, 500=3.37% 00:43:04.582 cpu : usr=97.51%, sys=2.15%, ctx=13, majf=0, minf=9 00:43:04.582 IO depths : 1=0.5%, 2=1.1%, 4=7.8%, 8=78.4%, 16=12.2%, 32=0.0%, >=64=0.0% 00:43:04.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.582 complete : 0=0.0%, 4=89.2%, 8=5.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.582 issued rwts: total=830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.582 filename2: (groupid=0, jobs=1): err= 0: pid=702621: Tue Dec 10 00:24:47 2024 00:43:04.582 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10067msec) 00:43:04.582 slat (nsec): min=5462, max=25486, avg=8365.95, stdev=2504.72 00:43:04.582 clat (msec): min=182, max=533, avg=303.68, stdev=62.37 00:43:04.582 lat (msec): min=182, max=533, avg=303.69, stdev=62.37 00:43:04.582 clat percentiles (msec): 00:43:04.582 | 1.00th=[ 184], 5.00th=[ 215], 10.00th=[ 255], 20.00th=[ 264], 00:43:04.582 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 300], 60.00th=[ 317], 00:43:04.582 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 347], 95.00th=[ 456], 00:43:04.582 | 99.00th=[ 535], 99.50th=[ 535], 99.90th=[ 535], 99.95th=[ 535], 00:43:04.582 | 99.99th=[ 535] 00:43:04.582 bw ( KiB/s): min= 128, max= 256, per=3.11%, avg=215.53, stdev=57.74, samples=19 00:43:04.582 iops : min= 32, max= 64, avg=53.84, stdev=14.41, samples=19 00:43:04.582 lat (msec) : 250=7.95%, 500=89.02%, 750=3.03% 00:43:04.582 cpu : usr=97.51%, sys=2.17%, ctx=13, majf=0, minf=9 00:43:04.582 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:43:04.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.582 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.582 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:04.582 filename2: (groupid=0, jobs=1): err= 0: pid=702622: Tue Dec 10 00:24:47 2024 00:43:04.582 read: IOPS=83, BW=335KiB/s (343kB/s)(3384KiB/10106msec) 00:43:04.582 slat (nsec): min=6320, max=66422, avg=8610.57, stdev=5520.63 00:43:04.582 clat (msec): min=5, max=336, avg=190.14, stdev=47.58 00:43:04.582 lat (msec): min=5, max=336, avg=190.14, stdev=47.58 00:43:04.582 clat percentiles (msec): 00:43:04.582 | 1.00th=[ 6], 5.00th=[ 86], 10.00th=[ 163], 20.00th=[ 178], 00:43:04.582 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 211], 00:43:04.582 | 70.00th=[ 215], 80.00th=[ 220], 90.00th=[ 228], 95.00th=[ 230], 00:43:04.582 | 99.00th=[ 275], 99.50th=[ 321], 99.90th=[ 338], 99.95th=[ 338], 00:43:04.582 | 99.99th=[ 338] 00:43:04.582 bw ( KiB/s): min= 256, max= 640, per=4.79%, avg=332.00, stdev=86.69, samples=20 00:43:04.582 iops : min= 64, max= 160, avg=83.00, stdev=21.67, samples=20 00:43:04.582 lat (msec) : 10=1.89%, 50=1.89%, 100=1.89%, 250=91.02%, 500=3.31% 00:43:04.582 cpu : usr=97.73%, sys=1.93%, ctx=10, majf=0, minf=9 00:43:04.582 IO depths : 1=0.4%, 2=0.9%, 4=7.8%, 8=78.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:43:04.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.582 complete : 0=0.0%, 4=89.2%, 8=5.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:04.582 issued rwts: total=846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:04.582 latency : target=0, window=0, percentile=100.00%, depth=16 
00:43:04.582 00:43:04.582 Run status group 0 (all jobs): 00:43:04.582 READ: bw=6924KiB/s (7090kB/s), 210KiB/s-361KiB/s (215kB/s-369kB/s), io=68.4MiB (71.7MB), run=10067-10118msec 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.582 bdev_null0 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.582 [2024-12-10 00:24:47.849199] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.582 bdev_null1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:04.582 { 00:43:04.582 "params": { 00:43:04.582 "name": "Nvme$subsystem", 00:43:04.582 "trtype": "$TEST_TRANSPORT", 00:43:04.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:04.582 "adrfam": "ipv4", 00:43:04.582 "trsvcid": "$NVMF_PORT", 00:43:04.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:04.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:04.582 "hdgst": ${hdgst:-false}, 00:43:04.582 "ddgst": ${ddgst:-false} 00:43:04.582 }, 00:43:04.582 "method": "bdev_nvme_attach_controller" 00:43:04.582 } 00:43:04.582 EOF 00:43:04.582 )") 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:04.582 00:24:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:04.583 { 00:43:04.583 "params": { 00:43:04.583 "name": "Nvme$subsystem", 00:43:04.583 "trtype": "$TEST_TRANSPORT", 00:43:04.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:04.583 "adrfam": "ipv4", 00:43:04.583 "trsvcid": "$NVMF_PORT", 00:43:04.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:04.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:04.583 "hdgst": ${hdgst:-false}, 00:43:04.583 "ddgst": ${ddgst:-false} 00:43:04.583 }, 00:43:04.583 "method": "bdev_nvme_attach_controller" 00:43:04.583 } 00:43:04.583 EOF 00:43:04.583 )") 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
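The gen_nvmf_target_json trace above assembles one JSON fragment per subsystem from a heredoc, with hdgst/ddgst defaulting to false, and pipes the joined result through jq. A rough standalone sketch of that pattern follows; it is an illustration of the technique, not the helper itself, and the surrounding [] wrapper is added here only so jq can validate the output:

#!/usr/bin/env bash
# Sketch: emit a bdev_nvme_attach_controller entry per subsystem id,
# mirroring the heredoc-plus-jq pattern visible in the trace.
hdgst=${hdgst:-false}
ddgst=${ddgst:-false}
config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": $hdgst,
    "ddgst": $ddgst
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Comma-join the fragments and validate with jq; the real helper nests the
# entries into the full layout the fio plugin expects.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .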
00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:04.583 "params": { 00:43:04.583 "name": "Nvme0", 00:43:04.583 "trtype": "tcp", 00:43:04.583 "traddr": "10.0.0.2", 00:43:04.583 "adrfam": "ipv4", 00:43:04.583 "trsvcid": "4420", 00:43:04.583 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:04.583 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:04.583 "hdgst": false, 00:43:04.583 "ddgst": false 00:43:04.583 }, 00:43:04.583 "method": "bdev_nvme_attach_controller" 00:43:04.583 },{ 00:43:04.583 "params": { 00:43:04.583 "name": "Nvme1", 00:43:04.583 "trtype": "tcp", 00:43:04.583 "traddr": "10.0.0.2", 00:43:04.583 "adrfam": "ipv4", 00:43:04.583 "trsvcid": "4420", 00:43:04.583 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:04.583 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:04.583 "hdgst": false, 00:43:04.583 "ddgst": false 00:43:04.583 }, 00:43:04.583 "method": "bdev_nvme_attach_controller" 00:43:04.583 }' 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:04.583 00:24:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:04.583 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:04.583 ... 00:43:04.583 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:04.583 ... 
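For reference, the fio invocation that produced the job listing above preloads the SPDK bdev engine and feeds both the generated job file and the bdev JSON through anonymous file descriptors. A simplified sketch, using on-disk files instead of /dev/fd/62 and /dev/fd/61 and an illustrative plugin path:

#!/usr/bin/env bash
# Sketch: run fio against SPDK bdevs through the external spdk_bdev ioengine.
PLUGIN=./spdk/build/fio/spdk_bdev          # assumed path to the built plugin
LD_PRELOAD="$PLUGIN" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf=./bdev.json \
    ./dif.fio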
00:43:04.583 fio-3.35 00:43:04.583 Starting 4 threads 00:43:09.856 00:43:09.856 filename0: (groupid=0, jobs=1): err= 0: pid=704591: Tue Dec 10 00:24:53 2024 00:43:09.856 read: IOPS=2982, BW=23.3MiB/s (24.4MB/s)(117MiB/5002msec) 00:43:09.856 slat (nsec): min=5956, max=30422, avg=8609.25, stdev=2834.44 00:43:09.856 clat (usec): min=731, max=4977, avg=2656.39, stdev=389.70 00:43:09.856 lat (usec): min=742, max=4996, avg=2665.00, stdev=389.81 00:43:09.856 clat percentiles (usec): 00:43:09.856 | 1.00th=[ 1729], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2376], 00:43:09.856 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2737], 00:43:09.856 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 3032], 95.00th=[ 3261], 00:43:09.856 | 99.00th=[ 3851], 99.50th=[ 4113], 99.90th=[ 4686], 99.95th=[ 4752], 00:43:09.856 | 99.99th=[ 4948] 00:43:09.856 bw ( KiB/s): min=22736, max=24560, per=27.49%, avg=23868.60, stdev=592.85, samples=10 00:43:09.856 iops : min= 2842, max= 3070, avg=2983.50, stdev=74.21, samples=10 00:43:09.856 lat (usec) : 750=0.01%, 1000=0.01% 00:43:09.856 lat (msec) : 2=3.17%, 4=96.17%, 10=0.65% 00:43:09.856 cpu : usr=93.52%, sys=6.16%, ctx=6, majf=0, minf=9 00:43:09.856 IO depths : 1=0.2%, 2=7.4%, 4=63.1%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.856 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.856 issued rwts: total=14920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.856 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:09.856 filename0: (groupid=0, jobs=1): err= 0: pid=704592: Tue Dec 10 00:24:53 2024 00:43:09.856 read: IOPS=2628, BW=20.5MiB/s (21.5MB/s)(103MiB/5001msec) 00:43:09.856 slat (usec): min=5, max=190, avg= 8.60, stdev= 3.29 00:43:09.856 clat (usec): min=861, max=5417, avg=3019.61, stdev=446.53 00:43:09.857 lat (usec): min=874, max=5428, avg=3028.21, stdev=446.24 00:43:09.857 clat percentiles (usec): 00:43:09.857 | 1.00th=[ 2089], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2737], 00:43:09.857 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:43:09.857 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3556], 95.00th=[ 3851], 00:43:09.857 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 5080], 99.95th=[ 5145], 00:43:09.857 | 99.99th=[ 5407] 00:43:09.857 bw ( KiB/s): min=20544, max=21696, per=24.15%, avg=20970.67, stdev=407.76, samples=9 00:43:09.857 iops : min= 2568, max= 2712, avg=2621.33, stdev=50.97, samples=9 00:43:09.857 lat (usec) : 1000=0.01% 00:43:09.857 lat (msec) : 2=0.50%, 4=95.40%, 10=4.09% 00:43:09.857 cpu : usr=94.24%, sys=5.46%, ctx=12, majf=0, minf=9 00:43:09.857 IO depths : 1=0.1%, 2=2.7%, 4=68.2%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.857 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.857 issued rwts: total=13143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.857 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:09.857 filename1: (groupid=0, jobs=1): err= 0: pid=704593: Tue Dec 10 00:24:53 2024 00:43:09.857 read: IOPS=2568, BW=20.1MiB/s (21.0MB/s)(100MiB/5002msec) 00:43:09.857 slat (nsec): min=5966, max=42826, avg=8521.46, stdev=2984.01 00:43:09.857 clat (usec): min=732, max=5661, avg=3089.36, stdev=431.78 00:43:09.857 lat (usec): min=744, max=5667, avg=3097.88, stdev=431.49 00:43:09.857 clat percentiles (usec): 00:43:09.857 | 1.00th=[ 2212], 5.00th=[ 2573], 
10.00th=[ 2704], 20.00th=[ 2868], 00:43:09.857 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3064], 00:43:09.857 | 70.00th=[ 3195], 80.00th=[ 3326], 90.00th=[ 3589], 95.00th=[ 3949], 00:43:09.857 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 5407], 00:43:09.857 | 99.99th=[ 5669] 00:43:09.857 bw ( KiB/s): min=19680, max=21200, per=23.57%, avg=20469.33, stdev=434.81, samples=9 00:43:09.857 iops : min= 2460, max= 2650, avg=2558.67, stdev=54.35, samples=9 00:43:09.857 lat (usec) : 750=0.01% 00:43:09.857 lat (msec) : 2=0.33%, 4=95.34%, 10=4.32% 00:43:09.857 cpu : usr=93.78%, sys=5.92%, ctx=6, majf=0, minf=9 00:43:09.857 IO depths : 1=0.1%, 2=1.4%, 4=71.1%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.857 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.857 issued rwts: total=12850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.857 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:09.857 filename1: (groupid=0, jobs=1): err= 0: pid=704594: Tue Dec 10 00:24:53 2024 00:43:09.857 read: IOPS=2674, BW=20.9MiB/s (21.9MB/s)(105MiB/5002msec) 00:43:09.857 slat (nsec): min=5962, max=36172, avg=8479.88, stdev=2891.87 00:43:09.857 clat (usec): min=614, max=5192, avg=2967.53, stdev=433.61 00:43:09.857 lat (usec): min=624, max=5205, avg=2976.01, stdev=433.40 00:43:09.857 clat percentiles (usec): 00:43:09.857 | 1.00th=[ 2024], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2671], 00:43:09.857 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:43:09.857 | 70.00th=[ 3064], 80.00th=[ 3228], 90.00th=[ 3490], 95.00th=[ 3752], 00:43:09.857 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 5080], 99.95th=[ 5080], 00:43:09.857 | 99.99th=[ 5145] 00:43:09.857 bw ( KiB/s): min=20880, max=21920, per=24.63%, avg=21384.89, stdev=398.33, samples=9 00:43:09.857 iops : min= 2610, max= 2740, avg=2673.11, stdev=49.79, samples=9 00:43:09.857 lat (usec) : 750=0.01% 00:43:09.857 lat (msec) : 2=0.81%, 4=96.22%, 10=2.96% 00:43:09.857 cpu : usr=94.08%, sys=5.64%, ctx=7, majf=0, minf=9 00:43:09.857 IO depths : 1=0.1%, 2=2.5%, 4=68.8%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.857 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.857 issued rwts: total=13376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.857 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:09.857 00:43:09.857 Run status group 0 (all jobs): 00:43:09.857 READ: bw=84.8MiB/s (88.9MB/s), 20.1MiB/s-23.3MiB/s (21.0MB/s-24.4MB/s), io=424MiB (445MB), run=5001-5002msec 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:09.857 00:24:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.857 00:43:09.857 real 0m24.588s 00:43:09.857 user 4m57.472s 00:43:09.857 sys 0m8.558s 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:09.857 00:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:09.857 ************************************ 00:43:09.857 END TEST fio_dif_rand_params 00:43:09.857 ************************************ 00:43:09.857 00:24:54 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:43:09.857 00:24:54 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:09.857 00:24:54 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:09.857 00:24:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:10.117 ************************************ 00:43:10.117 START TEST fio_dif_digest 00:43:10.117 ************************************ 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:43:10.117 00:24:54 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:10.117 bdev_null0 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:10.117 [2024-12-10 00:24:54.397055] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:10.117 { 00:43:10.117 "params": { 00:43:10.117 "name": "Nvme$subsystem", 00:43:10.117 "trtype": "$TEST_TRANSPORT", 00:43:10.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:10.117 "adrfam": "ipv4", 00:43:10.117 "trsvcid": "$NVMF_PORT", 00:43:10.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:10.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:10.117 "hdgst": ${hdgst:-false}, 00:43:10.117 "ddgst": 
${ddgst:-false} 00:43:10.117 }, 00:43:10.117 "method": "bdev_nvme_attach_controller" 00:43:10.117 } 00:43:10.117 EOF 00:43:10.117 )") 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
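The ldd/grep/awk steps in the trace above check whether the fio plugin was linked against a sanitizer runtime so that runtime can be preloaded ahead of the plugin. A compact sketch of that check, hedged as an approximation of what the fio_plugin helper does:

#!/usr/bin/env bash
# Sketch: if the plugin links libasan (or libclang_rt.asan), preload that
# runtime before the plugin so the sanitizer initializes before fio loads it.
plugin=./spdk/build/fio/spdk_bdev          # assumed path
preload=""
for sanitizer in libasan libclang_rt.asan; do
    # Third ldd column is the resolved library path
    # (e.g. "libasan.so.8 => /usr/lib64/libasan.so.8").
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && preload+=" $asan_lib"
done
LD_PRELOAD="$preload $plugin" /usr/src/fio/fio --ioengine=spdk_bdev "$@"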
00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:10.117 "params": { 00:43:10.117 "name": "Nvme0", 00:43:10.117 "trtype": "tcp", 00:43:10.117 "traddr": "10.0.0.2", 00:43:10.117 "adrfam": "ipv4", 00:43:10.117 "trsvcid": "4420", 00:43:10.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:10.117 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:10.117 "hdgst": true, 00:43:10.117 "ddgst": true 00:43:10.117 }, 00:43:10.117 "method": "bdev_nvme_attach_controller" 00:43:10.117 }' 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:10.117 00:24:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:10.375 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:10.375 ... 
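Unlike the earlier rand_params jobs, the digest run attaches its controller with both NVMe/TCP header and data digests enabled, as the printed config above shows. A hand-written copy of that attach entry for reference; the bdev.json file name is an assumption, and this is only the single entry, not a complete --spdk_json_conf file:

#!/usr/bin/env bash
# Sketch: the attach entry the digest job ran against, with hdgst/ddgst on.
cat > bdev.json <<'EOF'
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": true,
    "ddgst": true
  },
  "method": "bdev_nvme_attach_controller"
}
EOF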
00:43:10.375 fio-3.35 00:43:10.375 Starting 3 threads 00:43:22.583 00:43:22.583 filename0: (groupid=0, jobs=1): err= 0: pid=705803: Tue Dec 10 00:25:05 2024 00:43:22.583 read: IOPS=294, BW=36.8MiB/s (38.6MB/s)(368MiB/10006msec) 00:43:22.583 slat (nsec): min=6258, max=32326, avg=11580.17, stdev=2021.23 00:43:22.583 clat (usec): min=7126, max=12841, avg=10174.71, stdev=695.91 00:43:22.583 lat (usec): min=7138, max=12852, avg=10186.29, stdev=695.85 00:43:22.583 clat percentiles (usec): 00:43:22.583 | 1.00th=[ 8586], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:43:22.583 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:43:22.583 | 70.00th=[10552], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:43:22.583 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12518], 99.95th=[12518], 00:43:22.583 | 99.99th=[12780] 00:43:22.583 bw ( KiB/s): min=37120, max=38656, per=34.91%, avg=37683.20, stdev=420.24, samples=20 00:43:22.583 iops : min= 290, max= 302, avg=294.40, stdev= 3.28, samples=20 00:43:22.583 lat (msec) : 10=39.55%, 20=60.45% 00:43:22.583 cpu : usr=92.20%, sys=7.52%, ctx=19, majf=0, minf=41 00:43:22.583 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:22.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.583 issued rwts: total=2946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:22.583 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:22.583 filename0: (groupid=0, jobs=1): err= 0: pid=705804: Tue Dec 10 00:25:05 2024 00:43:22.583 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(337MiB/10044msec) 00:43:22.583 slat (nsec): min=6275, max=27571, avg=11632.22, stdev=1877.09 00:43:22.583 clat (usec): min=8510, max=44260, avg=11143.37, stdev=1193.08 00:43:22.583 lat (usec): min=8521, max=44272, avg=11155.00, stdev=1193.03 00:43:22.583 clat percentiles (usec): 00:43:22.583 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:43:22.583 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:43:22.583 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:43:22.583 | 99.00th=[13173], 99.50th=[13435], 99.90th=[14353], 99.95th=[44303], 00:43:22.583 | 99.99th=[44303] 00:43:22.583 bw ( KiB/s): min=33792, max=35328, per=31.96%, avg=34496.00, stdev=453.98, samples=20 00:43:22.583 iops : min= 264, max= 276, avg=269.50, stdev= 3.55, samples=20 00:43:22.583 lat (msec) : 10=6.23%, 20=93.70%, 50=0.07% 00:43:22.583 cpu : usr=92.13%, sys=7.59%, ctx=21, majf=0, minf=82 00:43:22.583 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:22.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.583 issued rwts: total=2697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:22.583 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:22.583 filename0: (groupid=0, jobs=1): err= 0: pid=705805: Tue Dec 10 00:25:05 2024 00:43:22.583 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(353MiB/10044msec) 00:43:22.583 slat (nsec): min=6237, max=51896, avg=11605.10, stdev=2025.45 00:43:22.583 clat (usec): min=8154, max=51047, avg=10630.41, stdev=1264.52 00:43:22.583 lat (usec): min=8167, max=51054, avg=10642.01, stdev=1264.47 00:43:22.583 clat percentiles (usec): 00:43:22.583 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:43:22.583 | 
30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:43:22.583 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:43:22.583 | 99.00th=[12387], 99.50th=[12518], 99.90th=[13435], 99.95th=[47973], 00:43:22.583 | 99.99th=[51119] 00:43:22.583 bw ( KiB/s): min=35328, max=37120, per=33.50%, avg=36160.00, stdev=511.16, samples=20 00:43:22.583 iops : min= 276, max= 290, avg=282.50, stdev= 3.99, samples=20 00:43:22.583 lat (msec) : 10=20.41%, 20=79.52%, 50=0.04%, 100=0.04% 00:43:22.583 cpu : usr=92.23%, sys=7.50%, ctx=24, majf=0, minf=60 00:43:22.583 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:22.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.583 issued rwts: total=2827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:22.583 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:22.583 00:43:22.583 Run status group 0 (all jobs): 00:43:22.583 READ: bw=105MiB/s (111MB/s), 33.6MiB/s-36.8MiB/s (35.2MB/s-38.6MB/s), io=1059MiB (1110MB), run=10006-10044msec 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.583 00:43:22.583 real 0m11.417s 00:43:22.583 user 0m37.082s 00:43:22.583 sys 0m2.694s 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:22.583 00:25:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:22.583 ************************************ 00:43:22.583 END TEST fio_dif_digest 00:43:22.583 ************************************ 00:43:22.583 00:25:05 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:22.583 00:25:05 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:22.583 00:25:05 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:22.583 00:25:05 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:43:22.583 00:25:05 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:22.583 00:25:05 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:43:22.583 00:25:05 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:22.583 00:25:05 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:22.583 rmmod nvme_tcp 00:43:22.583 rmmod nvme_fabrics 00:43:22.583 rmmod nvme_keyring 00:43:22.583 00:25:05 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:22.583 00:25:05 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:43:22.583 00:25:05 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:43:22.583 00:25:05 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 696907 ']' 00:43:22.583 00:25:05 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 696907 00:43:22.583 00:25:05 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 696907 ']' 00:43:22.583 00:25:05 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 696907 00:43:22.584 00:25:05 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:43:22.584 00:25:05 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:22.584 00:25:05 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 696907 00:43:22.584 00:25:05 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:22.584 00:25:05 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:22.584 00:25:05 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 696907' 00:43:22.584 killing process with pid 696907 00:43:22.584 00:25:05 nvmf_dif -- common/autotest_common.sh@973 -- # kill 696907 00:43:22.584 00:25:05 nvmf_dif -- common/autotest_common.sh@978 -- # wait 696907 00:43:22.584 00:25:06 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:22.584 00:25:06 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:25.120 Waiting for block devices as requested 00:43:25.120 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:25.120 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:25.120 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:25.379 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:25.379 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:25.379 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:25.638 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:25.638 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:25.638 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:25.897 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:25.897 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:25.897 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:26.157 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:26.157 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:26.157 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:26.416 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:26.416 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:43:26.676 00:25:10 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:26.676 00:25:10 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:26.676 00:25:10 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:43:26.676 00:25:10 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:43:26.676 00:25:10 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:26.676 00:25:10 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:43:26.676 00:25:11 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:26.676 00:25:11 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:26.676 00:25:11 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:26.676 00:25:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:26.676 00:25:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:29.213 00:25:13 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:29.213 
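nvmftestfini above unwinds the whole environment: the kernel nvme-tcp modules are removed, the nvmf_tgt process is killed, iptables rules tagged SPDK_NVMF are stripped, and the target network namespace and addresses are flushed. A hedged sketch of the namespace/iptables part; the namespace name matches this run, but deleting it is an assumption about what _remove_spdk_ns amounts to:

#!/usr/bin/env bash
# Sketch: cleanup steps mirrored from nvmftestfini.
# Drop only the rules carrying the SPDK_NVMF comment marker, as iptr does.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed _remove_spdk_ns equivalent
ip -4 addr flush cvl_0_1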
00:43:29.213 real 1m18.377s 00:43:29.213 user 7m21.793s 00:43:29.213 sys 0m29.487s 00:43:29.213 00:25:13 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:29.213 00:25:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:29.213 ************************************ 00:43:29.213 END TEST nvmf_dif 00:43:29.213 ************************************ 00:43:29.213 00:25:13 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:29.213 00:25:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:29.213 00:25:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:29.213 00:25:13 -- common/autotest_common.sh@10 -- # set +x 00:43:29.213 ************************************ 00:43:29.213 START TEST nvmf_abort_qd_sizes 00:43:29.213 ************************************ 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:29.213 * Looking for test storage... 00:43:29.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:29.213 00:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:29.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:29.214 --rc genhtml_branch_coverage=1 00:43:29.214 --rc genhtml_function_coverage=1 00:43:29.214 --rc genhtml_legend=1 00:43:29.214 --rc geninfo_all_blocks=1 00:43:29.214 --rc geninfo_unexecuted_blocks=1 00:43:29.214 00:43:29.214 ' 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:29.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:29.214 --rc genhtml_branch_coverage=1 00:43:29.214 --rc genhtml_function_coverage=1 00:43:29.214 --rc genhtml_legend=1 00:43:29.214 --rc geninfo_all_blocks=1 00:43:29.214 --rc geninfo_unexecuted_blocks=1 00:43:29.214 00:43:29.214 ' 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:29.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:29.214 --rc genhtml_branch_coverage=1 00:43:29.214 --rc genhtml_function_coverage=1 00:43:29.214 --rc genhtml_legend=1 00:43:29.214 --rc geninfo_all_blocks=1 00:43:29.214 --rc geninfo_unexecuted_blocks=1 00:43:29.214 00:43:29.214 ' 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:29.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:29.214 --rc genhtml_branch_coverage=1 00:43:29.214 --rc genhtml_function_coverage=1 00:43:29.214 --rc genhtml_legend=1 00:43:29.214 --rc geninfo_all_blocks=1 00:43:29.214 --rc geninfo_unexecuted_blocks=1 00:43:29.214 00:43:29.214 ' 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:29.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:43:29.214 00:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:37.337 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:37.337 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:37.337 Found net devices under 0000:af:00.0: cvl_0_0 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:37.337 Found net devices under 0000:af:00.1: cvl_0_1 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:37.337 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:37.338 00:25:20 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:37.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:37.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:43:37.338 00:43:37.338 --- 10.0.0.2 ping statistics --- 00:43:37.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:37.338 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:37.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:37.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:43:37.338 00:43:37.338 --- 10.0.0.1 ping statistics --- 00:43:37.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:37.338 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:37.338 00:25:20 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:39.876 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:39.876 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:41.254 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=714073 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 714073 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 714073 ']' 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:41.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:41.514 00:25:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:41.514 [2024-12-10 00:25:25.941929] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:43:41.514 [2024-12-10 00:25:25.941984] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:41.773 [2024-12-10 00:25:26.037027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:41.773 [2024-12-10 00:25:26.079991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:41.773 [2024-12-10 00:25:26.080029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:41.773 [2024-12-10 00:25:26.080038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:41.773 [2024-12-10 00:25:26.080047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:41.773 [2024-12-10 00:25:26.080054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:41.773 [2024-12-10 00:25:26.081883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:41.773 [2024-12-10 00:25:26.081917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:41.773 [2024-12-10 00:25:26.081966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:41.773 [2024-12-10 00:25:26.081967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:42.341 00:25:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:42.341 00:25:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:42.341 00:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:42.341 00:25:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:42.341 00:25:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:d8:00.0 ]] 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:42.600 
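For reference, the namespace plumbing traced above (nvmf_tcp_init in nvmf/common.sh) reduces to roughly the standalone sequence below. This is a sketch, not the authoritative script: the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are simply what this E810 host exposed in this run, so substitute your own NIC names and paths.

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (host netns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface and sanity-check reachability
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # run the SPDK target inside the namespace: four cores (-m 0xf), all tracepoint groups
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf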
00:25:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:d8:00.0 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:42.600 00:25:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:42.600 ************************************ 00:43:42.600 START TEST spdk_target_abort 00:43:42.600 ************************************ 00:43:42.600 00:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:42.600 00:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:42.600 00:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:43:42.600 00:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:42.600 00:25:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:45.887 spdk_targetn1 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:45.887 [2024-12-10 00:25:29.735563] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:45.887 [2024-12-10 00:25:29.784558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:45.887 00:25:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:49.177 Initializing NVMe Controllers 00:43:49.177 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:49.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:49.177 Initialization complete. Launching workers. 00:43:49.177 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17292, failed: 0 00:43:49.177 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1348, failed to submit 15944 00:43:49.177 success 751, unsuccessful 597, failed 0 00:43:49.177 00:25:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:49.177 00:25:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:52.468 Initializing NVMe Controllers 00:43:52.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:52.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:52.469 Initialization complete. Launching workers. 00:43:52.469 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8819, failed: 0 00:43:52.469 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1238, failed to submit 7581 00:43:52.469 success 318, unsuccessful 920, failed 0 00:43:52.469 00:25:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:52.469 00:25:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:55.759 Initializing NVMe Controllers 00:43:55.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:55.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:55.759 Initialization complete. Launching workers. 
00:43:55.759 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38494, failed: 0 00:43:55.759 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2698, failed to submit 35796 00:43:55.759 success 606, unsuccessful 2092, failed 0 00:43:55.759 00:25:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:43:55.759 00:25:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.759 00:25:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:55.759 00:25:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.759 00:25:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:43:55.759 00:25:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.759 00:25:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:57.137 00:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.137 00:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 714073 00:43:57.137 00:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 714073 ']' 00:43:57.137 00:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 714073 00:43:57.137 00:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:43:57.137 00:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:57.137 00:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 714073 00:43:57.137 00:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:57.137 00:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:57.137 00:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 714073' 00:43:57.137 killing process with pid 714073 00:43:57.137 00:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 714073 00:43:57.137 00:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 714073 00:43:57.396 00:43:57.396 real 0m14.755s 00:43:57.396 user 0m58.487s 00:43:57.396 sys 0m2.970s 00:43:57.396 00:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:57.396 00:25:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:57.396 ************************************ 00:43:57.396 END TEST spdk_target_abort 00:43:57.396 ************************************ 00:43:57.396 00:25:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:43:57.396 00:25:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:57.396 00:25:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:57.396 00:25:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:57.396 ************************************ 00:43:57.396 START TEST kernel_target_abort 00:43:57.396 
************************************ 00:43:57.396 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:43:57.396 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:43:57.396 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:43:57.396 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:43:57.396 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:43:57.396 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:43:57.396 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:43:57.396 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:43:57.396 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:43:57.397 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:43:57.397 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:43:57.397 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:43:57.397 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:43:57.397 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:43:57.397 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:43:57.397 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:57.397 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:57.397 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:43:57.397 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:43:57.397 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:43:57.397 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:43:57.397 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:43:57.397 00:25:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:00.689 Waiting for block devices as requested 00:44:00.689 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:00.948 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:00.948 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:00.948 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:01.207 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:01.207 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:01.207 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:01.466 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:01.466 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:01.466 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:01.726 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:01.726 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:01.726 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:01.726 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:01.985 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:01.985 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:02.244 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:44:02.244 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:02.245 No valid GPT data, bailing 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:02.245 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:02.245 00:25:46 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:44:02.504 00:44:02.504 Discovery Log Number of Records 2, Generation counter 2 00:44:02.504 =====Discovery Log Entry 0====== 00:44:02.504 trtype: tcp 00:44:02.504 adrfam: ipv4 00:44:02.504 subtype: current discovery subsystem 00:44:02.504 treq: not specified, sq flow control disable supported 00:44:02.504 portid: 1 00:44:02.504 trsvcid: 4420 00:44:02.504 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:02.504 traddr: 10.0.0.1 00:44:02.504 eflags: none 00:44:02.504 sectype: none 00:44:02.504 =====Discovery Log Entry 1====== 00:44:02.504 trtype: tcp 00:44:02.504 adrfam: ipv4 00:44:02.504 subtype: nvme subsystem 00:44:02.504 treq: not specified, sq flow control disable supported 00:44:02.504 portid: 1 00:44:02.504 trsvcid: 4420 00:44:02.504 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:02.504 traddr: 10.0.0.1 00:44:02.504 eflags: none 00:44:02.504 sectype: none 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:02.504 00:25:46 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:02.504 00:25:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:05.812 Initializing NVMe Controllers 00:44:05.812 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:05.812 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:05.812 Initialization complete. Launching workers. 00:44:05.812 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72317, failed: 0 00:44:05.812 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 72317, failed to submit 0 00:44:05.812 success 0, unsuccessful 72317, failed 0 00:44:05.812 00:25:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:05.813 00:25:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:09.244 Initializing NVMe Controllers 00:44:09.244 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:09.244 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:09.244 Initialization complete. Launching workers. 
00:44:09.244 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 134989, failed: 0 00:44:09.244 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26046, failed to submit 108943 00:44:09.244 success 0, unsuccessful 26046, failed 0 00:44:09.244 00:25:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:09.244 00:25:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:11.780 Initializing NVMe Controllers 00:44:11.780 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:11.780 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:11.780 Initialization complete. Launching workers. 00:44:11.780 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 123281, failed: 0 00:44:11.780 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30838, failed to submit 92443 00:44:11.780 success 0, unsuccessful 30838, failed 0 00:44:11.780 00:25:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:44:11.780 00:25:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:44:11.780 00:25:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:44:11.780 00:25:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:11.780 00:25:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:11.780 00:25:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:44:11.780 00:25:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:11.780 00:25:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:44:11.780 00:25:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:44:11.780 00:25:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:15.975 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:15.975 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:15.975 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:15.975 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:15.975 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:15.975 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:15.975 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:15.975 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:15.975 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:15.975 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:15.975 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:15.975 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:15.975 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:15.975 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:15.975 0000:80:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:44:15.975 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:16.913 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:44:16.913 00:44:16.913 real 0m19.639s 00:44:16.913 user 0m8.431s 00:44:16.913 sys 0m6.491s 00:44:16.913 00:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:16.913 00:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:16.913 ************************************ 00:44:16.913 END TEST kernel_target_abort 00:44:16.913 ************************************ 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:17.173 rmmod nvme_tcp 00:44:17.173 rmmod nvme_fabrics 00:44:17.173 rmmod nvme_keyring 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 714073 ']' 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 714073 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 714073 ']' 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 714073 00:44:17.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (714073) - No such process 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 714073 is not found' 00:44:17.173 Process with pid 714073 is not found 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:17.173 00:26:01 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:20.464 Waiting for block devices as requested 00:44:20.464 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:20.464 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:20.723 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:20.723 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:20.723 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:20.981 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:20.981 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:20.981 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:21.240 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:21.240 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:21.240 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:21.240 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:21.498 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:21.498 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:21.498 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:21.756 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:21.756 0000:d8:00.0 
(8086 0a54): vfio-pci -> nvme 00:44:22.015 00:26:06 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:22.015 00:26:06 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:22.015 00:26:06 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:44:22.015 00:26:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:44:22.015 00:26:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:44:22.015 00:26:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:22.015 00:26:06 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:22.015 00:26:06 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:22.015 00:26:06 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:22.015 00:26:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:22.015 00:26:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:24.554 00:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:24.554 00:44:24.554 real 0m55.243s 00:44:24.554 user 1m12.009s 00:44:24.554 sys 0m20.433s 00:44:24.554 00:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:24.554 00:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:24.554 ************************************ 00:44:24.554 END TEST nvmf_abort_qd_sizes 00:44:24.554 ************************************ 00:44:24.554 00:26:08 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:24.554 00:26:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:24.554 00:26:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:24.554 00:26:08 -- common/autotest_common.sh@10 -- # set +x 00:44:24.554 ************************************ 00:44:24.554 START TEST keyring_file 00:44:24.554 ************************************ 00:44:24.554 00:26:08 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:24.554 * Looking for test storage... 
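The kernel_target_abort pass above exercises the in-kernel nvmet target instead of the SPDK one. A rough standalone equivalent of the configfs steps traced in nvmf/common.sh is sketched below; the trace only shows the values being echoed, so the attribute file names are filled in from the standard nvmet configfs layout rather than taken from the log, and the attr_model write seen in the trace is omitted.

    NQN=nqn.2016-06.io.spdk:testnqn
    SUB=/sys/kernel/config/nvmet/subsystems/$NQN
    PORT=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    mkdir "$SUB" "$SUB/namespaces/1" "$PORT"
    echo 1             > "$SUB/attr_allow_any_host"
    echo /dev/nvme0n1  > "$SUB/namespaces/1/device_path"
    echo 1             > "$SUB/namespaces/1/enable"
    echo 10.0.0.1      > "$PORT/addr_traddr"
    echo tcp           > "$PORT/addr_trtype"
    echo 4420          > "$PORT/addr_trsvcid"
    echo ipv4          > "$PORT/addr_adrfam"
    ln -s "$SUB" "$PORT/subsystems/"      # expose the subsystem on the port
    # teardown, as done by clean_kernel_target
    echo 0 > "$SUB/namespaces/1/enable"
    rm -f "$PORT/subsystems/$NQN"
    rmdir "$SUB/namespaces/1" "$PORT" "$SUB"
    modprobe -r nvmet_tcp nvmet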
00:44:24.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:24.554 00:26:08 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:24.554 00:26:08 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:44:24.554 00:26:08 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:24.554 00:26:08 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@345 -- # : 1 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@353 -- # local d=1 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@355 -- # echo 1 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@353 -- # local d=2 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@355 -- # echo 2 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@368 -- # return 0 00:44:24.554 00:26:08 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:24.554 00:26:08 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:24.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:24.554 --rc genhtml_branch_coverage=1 00:44:24.554 --rc genhtml_function_coverage=1 00:44:24.554 --rc genhtml_legend=1 00:44:24.554 --rc geninfo_all_blocks=1 00:44:24.554 --rc geninfo_unexecuted_blocks=1 00:44:24.554 00:44:24.554 ' 00:44:24.554 00:26:08 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:24.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:24.554 --rc genhtml_branch_coverage=1 00:44:24.554 --rc genhtml_function_coverage=1 00:44:24.554 --rc genhtml_legend=1 00:44:24.554 --rc geninfo_all_blocks=1 
00:44:24.554 --rc geninfo_unexecuted_blocks=1 00:44:24.554 00:44:24.554 ' 00:44:24.554 00:26:08 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:24.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:24.554 --rc genhtml_branch_coverage=1 00:44:24.554 --rc genhtml_function_coverage=1 00:44:24.554 --rc genhtml_legend=1 00:44:24.554 --rc geninfo_all_blocks=1 00:44:24.554 --rc geninfo_unexecuted_blocks=1 00:44:24.554 00:44:24.554 ' 00:44:24.554 00:26:08 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:24.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:24.554 --rc genhtml_branch_coverage=1 00:44:24.554 --rc genhtml_function_coverage=1 00:44:24.554 --rc genhtml_legend=1 00:44:24.554 --rc geninfo_all_blocks=1 00:44:24.554 --rc geninfo_unexecuted_blocks=1 00:44:24.554 00:44:24.554 ' 00:44:24.554 00:26:08 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:24.554 00:26:08 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:24.554 00:26:08 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:24.554 00:26:08 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:24.555 00:26:08 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:24.555 00:26:08 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.555 00:26:08 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.555 00:26:08 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.555 00:26:08 keyring_file -- paths/export.sh@5 -- # export PATH 00:44:24.555 00:26:08 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@51 -- # : 0 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:24.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:24.555 00:26:08 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:24.555 00:26:08 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:24.555 00:26:08 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:44:24.555 00:26:08 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:44:24.555 00:26:08 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:44:24.555 00:26:08 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FQyTc0DGxI 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FQyTc0DGxI 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FQyTc0DGxI 00:44:24.555 00:26:08 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.FQyTc0DGxI 00:44:24.555 00:26:08 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@17 -- # name=key1 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.RhDuoGIC8T 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:24.555 00:26:08 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.RhDuoGIC8T 00:44:24.555 00:26:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.RhDuoGIC8T 00:44:24.555 00:26:08 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.RhDuoGIC8T 00:44:24.555 00:26:08 keyring_file -- keyring/file.sh@30 -- # tgtpid=723492 00:44:24.555 00:26:08 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:24.555 00:26:08 keyring_file -- keyring/file.sh@32 -- # waitforlisten 723492 00:44:24.555 00:26:08 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 723492 ']' 00:44:24.555 00:26:08 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:24.555 00:26:08 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:24.555 00:26:08 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:24.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:24.555 00:26:08 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:24.555 00:26:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:24.555 [2024-12-10 00:26:08.912609] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:44:24.555 [2024-12-10 00:26:08.912662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723492 ] 00:44:24.555 [2024-12-10 00:26:09.000126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:24.815 [2024-12-10 00:26:09.039185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:25.384 00:26:09 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:25.384 [2024-12-10 00:26:09.742021] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:25.384 null0 00:44:25.384 [2024-12-10 00:26:09.774017] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:25.384 [2024-12-10 00:26:09.774384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:25.384 00:26:09 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:25.384 [2024-12-10 00:26:09.806094] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:25.384 request: 00:44:25.384 { 00:44:25.384 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:25.384 "secure_channel": false, 00:44:25.384 "listen_address": { 00:44:25.384 "trtype": "tcp", 00:44:25.384 "traddr": "127.0.0.1", 00:44:25.384 "trsvcid": "4420" 00:44:25.384 }, 00:44:25.384 "method": "nvmf_subsystem_add_listener", 00:44:25.384 "req_id": 1 00:44:25.384 } 00:44:25.384 Got JSON-RPC error response 00:44:25.384 response: 00:44:25.384 { 00:44:25.384 "code": 
-32602, 00:44:25.384 "message": "Invalid parameters" 00:44:25.384 } 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:25.384 00:26:09 keyring_file -- keyring/file.sh@47 -- # bperfpid=723555 00:44:25.384 00:26:09 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:25.384 00:26:09 keyring_file -- keyring/file.sh@49 -- # waitforlisten 723555 /var/tmp/bperf.sock 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 723555 ']' 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:25.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:25.384 00:26:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:25.643 [2024-12-10 00:26:09.861542] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:44:25.643 [2024-12-10 00:26:09.861587] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723555 ] 00:44:25.643 [2024-12-10 00:26:09.950286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:25.643 [2024-12-10 00:26:09.990186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:25.643 00:26:10 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:25.643 00:26:10 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:25.643 00:26:10 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FQyTc0DGxI 00:44:25.643 00:26:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FQyTc0DGxI 00:44:25.902 00:26:10 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RhDuoGIC8T 00:44:25.902 00:26:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RhDuoGIC8T 00:44:26.161 00:26:10 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:44:26.161 00:26:10 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:44:26.161 00:26:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:26.161 00:26:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:26.161 00:26:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:26.420 
00:26:10 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.FQyTc0DGxI == \/\t\m\p\/\t\m\p\.\F\Q\y\T\c\0\D\G\x\I ]] 00:44:26.420 00:26:10 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:44:26.420 00:26:10 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:44:26.420 00:26:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:26.420 00:26:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:26.420 00:26:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:26.420 00:26:10 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.RhDuoGIC8T == \/\t\m\p\/\t\m\p\.\R\h\D\u\o\G\I\C\8\T ]] 00:44:26.420 00:26:10 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:44:26.420 00:26:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:26.420 00:26:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:26.420 00:26:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:26.420 00:26:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:26.420 00:26:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:26.679 00:26:11 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:44:26.679 00:26:11 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:44:26.679 00:26:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:26.679 00:26:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:26.679 00:26:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:26.679 00:26:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:26.679 00:26:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:26.939 00:26:11 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:44:26.939 00:26:11 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:26.939 00:26:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:27.198 [2024-12-10 00:26:11.429716] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:27.198 nvme0n1 00:44:27.198 00:26:11 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:44:27.198 00:26:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:27.198 00:26:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:27.198 00:26:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:27.198 00:26:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:27.198 00:26:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:27.457 00:26:11 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:27.457 00:26:11 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:27.457 00:26:11 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:44:27.457 00:26:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:27.457 00:26:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:27.457 00:26:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:27.457 00:26:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:27.457 00:26:11 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:27.457 00:26:11 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:27.716 Running I/O for 1 seconds... 00:44:28.654 18538.00 IOPS, 72.41 MiB/s 00:44:28.654 Latency(us) 00:44:28.654 [2024-12-09T23:26:13.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:28.654 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:28.654 nvme0n1 : 1.00 18584.69 72.60 0.00 0.00 6875.15 2804.94 10538.19 00:44:28.654 [2024-12-09T23:26:13.127Z] =================================================================================================================== 00:44:28.654 [2024-12-09T23:26:13.127Z] Total : 18584.69 72.60 0.00 0.00 6875.15 2804.94 10538.19 00:44:28.654 { 00:44:28.654 "results": [ 00:44:28.654 { 00:44:28.654 "job": "nvme0n1", 00:44:28.654 "core_mask": "0x2", 00:44:28.654 "workload": "randrw", 00:44:28.654 "percentage": 50, 00:44:28.654 "status": "finished", 00:44:28.654 "queue_depth": 128, 00:44:28.654 "io_size": 4096, 00:44:28.654 "runtime": 1.004429, 00:44:28.654 "iops": 18584.68841500992, 00:44:28.654 "mibps": 72.5964391211325, 00:44:28.654 "io_failed": 0, 00:44:28.654 "io_timeout": 0, 00:44:28.654 "avg_latency_us": 6875.146993175122, 00:44:28.654 "min_latency_us": 2804.9408, 00:44:28.654 "max_latency_us": 10538.1888 00:44:28.654 } 00:44:28.654 ], 00:44:28.654 "core_count": 1 00:44:28.654 } 00:44:28.654 00:26:13 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:28.654 00:26:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:28.914 00:26:13 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:28.914 00:26:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:28.914 00:26:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:28.914 00:26:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:28.914 00:26:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:28.914 00:26:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:29.173 00:26:13 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:29.173 00:26:13 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:29.173 00:26:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:29.173 00:26:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:29.173 00:26:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:29.173 00:26:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:29.173 00:26:13 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:29.173 00:26:13 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:29.173 00:26:13 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:29.173 00:26:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:29.173 00:26:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:29.173 00:26:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:29.173 00:26:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:29.173 00:26:13 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:29.433 00:26:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:29.433 00:26:13 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:29.433 00:26:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:29.433 [2024-12-10 00:26:13.816619] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:29.433 [2024-12-10 00:26:13.817550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd41460 (107): Transport endpoint is not connected 00:44:29.433 [2024-12-10 00:26:13.818544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd41460 (9): Bad file descriptor 00:44:29.433 [2024-12-10 00:26:13.819545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:29.433 [2024-12-10 00:26:13.819557] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:29.433 [2024-12-10 00:26:13.819566] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:29.433 [2024-12-10 00:26:13.819576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
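The nvme_tcp and nvme_ctrlr errors just above are the expected negative path: after the successful bdevperf run with key0, nvme0 is detached and re-attached with --psk key1, a PSK the target side was not set up to accept, so the NOT/valid_exec_arg wrapper asserts that the attach fails. Stripped of that wrapper, the two RPCs issued against the bperf socket amount to the sketch below; the paths, NQNs and key name are copied from this log, while the surrounding helper logic in keyring/common.sh is not reproduced. The failing call returns code -5, Input/output error, as the JSON-RPC request/response dump that follows records.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # same script bperf_cmd wraps
$rpc -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RhDuoGIC8T
# Expected failure: the TLS handshake with the wrong PSK is torn down, which is what the
# "Transport endpoint is not connected" / "Bad file descriptor" errors above correspond to.
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
  -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
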
00:44:29.433 request: 00:44:29.433 { 00:44:29.433 "name": "nvme0", 00:44:29.433 "trtype": "tcp", 00:44:29.433 "traddr": "127.0.0.1", 00:44:29.433 "adrfam": "ipv4", 00:44:29.433 "trsvcid": "4420", 00:44:29.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:29.433 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:29.433 "prchk_reftag": false, 00:44:29.433 "prchk_guard": false, 00:44:29.433 "hdgst": false, 00:44:29.433 "ddgst": false, 00:44:29.433 "psk": "key1", 00:44:29.433 "allow_unrecognized_csi": false, 00:44:29.433 "method": "bdev_nvme_attach_controller", 00:44:29.433 "req_id": 1 00:44:29.433 } 00:44:29.433 Got JSON-RPC error response 00:44:29.433 response: 00:44:29.433 { 00:44:29.433 "code": -5, 00:44:29.433 "message": "Input/output error" 00:44:29.433 } 00:44:29.433 00:26:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:29.433 00:26:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:29.433 00:26:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:29.433 00:26:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:29.433 00:26:13 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:29.433 00:26:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:29.433 00:26:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:29.433 00:26:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:29.433 00:26:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:29.433 00:26:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:29.692 00:26:14 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:29.692 00:26:14 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:29.692 00:26:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:29.692 00:26:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:29.692 00:26:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:29.692 00:26:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:29.692 00:26:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:29.951 00:26:14 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:29.951 00:26:14 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:29.951 00:26:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:30.210 00:26:14 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:30.210 00:26:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:30.210 00:26:14 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:30.210 00:26:14 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:30.210 00:26:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:30.469 00:26:14 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:44:30.469 00:26:14 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.FQyTc0DGxI 00:44:30.469 00:26:14 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.FQyTc0DGxI 00:44:30.469 00:26:14 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:30.469 00:26:14 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.FQyTc0DGxI 00:44:30.469 00:26:14 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:30.469 00:26:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:30.469 00:26:14 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:30.469 00:26:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:30.469 00:26:14 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FQyTc0DGxI 00:44:30.469 00:26:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FQyTc0DGxI 00:44:30.728 [2024-12-10 00:26:15.019658] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FQyTc0DGxI': 0100660 00:44:30.728 [2024-12-10 00:26:15.019684] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:30.728 request: 00:44:30.728 { 00:44:30.728 "name": "key0", 00:44:30.728 "path": "/tmp/tmp.FQyTc0DGxI", 00:44:30.728 "method": "keyring_file_add_key", 00:44:30.728 "req_id": 1 00:44:30.728 } 00:44:30.728 Got JSON-RPC error response 00:44:30.728 response: 00:44:30.728 { 00:44:30.728 "code": -1, 00:44:30.728 "message": "Operation not permitted" 00:44:30.728 } 00:44:30.728 00:26:15 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:30.728 00:26:15 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:30.728 00:26:15 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:30.728 00:26:15 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:30.728 00:26:15 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.FQyTc0DGxI 00:44:30.728 00:26:15 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FQyTc0DGxI 00:44:30.728 00:26:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FQyTc0DGxI 00:44:30.987 00:26:15 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.FQyTc0DGxI 00:44:30.987 00:26:15 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:30.987 00:26:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:30.987 00:26:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:30.987 00:26:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:30.987 00:26:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:30.987 00:26:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:30.987 00:26:15 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:30.987 00:26:15 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:30.987 00:26:15 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:30.987 00:26:15 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:30.987 00:26:15 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:30.987 00:26:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:30.987 00:26:15 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:30.987 00:26:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:30.987 00:26:15 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:30.987 00:26:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:31.246 [2024-12-10 00:26:15.597181] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.FQyTc0DGxI': No such file or directory 00:44:31.246 [2024-12-10 00:26:15.597204] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:31.246 [2024-12-10 00:26:15.597222] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:31.246 [2024-12-10 00:26:15.597230] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:31.246 [2024-12-10 00:26:15.597240] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:31.246 [2024-12-10 00:26:15.597247] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:31.246 request: 00:44:31.246 { 00:44:31.246 "name": "nvme0", 00:44:31.246 "trtype": "tcp", 00:44:31.246 "traddr": "127.0.0.1", 00:44:31.246 "adrfam": "ipv4", 00:44:31.246 "trsvcid": "4420", 00:44:31.246 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:31.246 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:31.246 "prchk_reftag": false, 00:44:31.246 "prchk_guard": false, 00:44:31.246 "hdgst": false, 00:44:31.246 "ddgst": false, 00:44:31.246 "psk": "key0", 00:44:31.246 "allow_unrecognized_csi": false, 00:44:31.246 "method": "bdev_nvme_attach_controller", 00:44:31.246 "req_id": 1 00:44:31.246 } 00:44:31.246 Got JSON-RPC error response 00:44:31.246 response: 00:44:31.246 { 00:44:31.246 "code": -19, 00:44:31.246 "message": "No such device" 00:44:31.246 } 00:44:31.246 00:26:15 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:31.246 00:26:15 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:31.246 00:26:15 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:31.246 00:26:15 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:31.246 00:26:15 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:31.246 00:26:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:31.505 00:26:15 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:31.505 00:26:15 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:44:31.505 00:26:15 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:31.505 00:26:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:31.505 00:26:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:31.505 00:26:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:31.505 00:26:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FJ6h8NaSjI 00:44:31.505 00:26:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:31.505 00:26:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:31.505 00:26:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:31.505 00:26:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:31.505 00:26:15 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:31.505 00:26:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:31.505 00:26:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:31.505 00:26:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FJ6h8NaSjI 00:44:31.505 00:26:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FJ6h8NaSjI 00:44:31.505 00:26:15 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.FJ6h8NaSjI 00:44:31.505 00:26:15 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FJ6h8NaSjI 00:44:31.505 00:26:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FJ6h8NaSjI 00:44:31.764 00:26:16 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:31.764 00:26:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:32.024 nvme0n1 00:44:32.024 00:26:16 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:32.024 00:26:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:32.024 00:26:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:32.024 00:26:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:32.024 00:26:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:32.024 00:26:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:32.024 00:26:16 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:32.024 00:26:16 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:32.024 00:26:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:32.284 00:26:16 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:32.284 00:26:16 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:32.284 00:26:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:32.284 00:26:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:32.284 00:26:16 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:32.543 00:26:16 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:32.543 00:26:16 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:32.543 00:26:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:32.543 00:26:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:32.543 00:26:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:32.543 00:26:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:32.543 00:26:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:32.802 00:26:17 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:32.802 00:26:17 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:32.802 00:26:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:32.802 00:26:17 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:32.802 00:26:17 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:32.802 00:26:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:33.061 00:26:17 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:33.061 00:26:17 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FJ6h8NaSjI 00:44:33.061 00:26:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FJ6h8NaSjI 00:44:33.320 00:26:17 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RhDuoGIC8T 00:44:33.320 00:26:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RhDuoGIC8T 00:44:33.579 00:26:17 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:33.579 00:26:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:33.579 nvme0n1 00:44:33.838 00:26:18 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:33.838 00:26:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:33.838 00:26:18 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:33.838 "subsystems": [ 00:44:33.838 { 00:44:33.838 "subsystem": "keyring", 00:44:33.838 "config": [ 00:44:33.838 { 00:44:33.838 "method": "keyring_file_add_key", 00:44:33.838 "params": { 00:44:33.838 "name": "key0", 00:44:33.838 "path": "/tmp/tmp.FJ6h8NaSjI" 00:44:33.838 } 00:44:33.838 }, 00:44:33.838 { 00:44:33.838 "method": "keyring_file_add_key", 00:44:33.838 "params": { 00:44:33.838 "name": "key1", 00:44:33.838 "path": "/tmp/tmp.RhDuoGIC8T" 00:44:33.838 } 00:44:33.838 } 00:44:33.838 ] 00:44:33.838 
}, 00:44:33.838 { 00:44:33.838 "subsystem": "iobuf", 00:44:33.838 "config": [ 00:44:33.838 { 00:44:33.838 "method": "iobuf_set_options", 00:44:33.838 "params": { 00:44:33.838 "small_pool_count": 8192, 00:44:33.838 "large_pool_count": 1024, 00:44:33.838 "small_bufsize": 8192, 00:44:33.838 "large_bufsize": 135168, 00:44:33.838 "enable_numa": false 00:44:33.838 } 00:44:33.838 } 00:44:33.838 ] 00:44:33.838 }, 00:44:33.838 { 00:44:33.838 "subsystem": "sock", 00:44:33.838 "config": [ 00:44:33.838 { 00:44:33.838 "method": "sock_set_default_impl", 00:44:33.838 "params": { 00:44:33.838 "impl_name": "posix" 00:44:33.838 } 00:44:33.838 }, 00:44:33.838 { 00:44:33.838 "method": "sock_impl_set_options", 00:44:33.838 "params": { 00:44:33.838 "impl_name": "ssl", 00:44:33.838 "recv_buf_size": 4096, 00:44:33.838 "send_buf_size": 4096, 00:44:33.838 "enable_recv_pipe": true, 00:44:33.838 "enable_quickack": false, 00:44:33.838 "enable_placement_id": 0, 00:44:33.838 "enable_zerocopy_send_server": true, 00:44:33.838 "enable_zerocopy_send_client": false, 00:44:33.838 "zerocopy_threshold": 0, 00:44:33.838 "tls_version": 0, 00:44:33.838 "enable_ktls": false 00:44:33.838 } 00:44:33.838 }, 00:44:33.838 { 00:44:33.839 "method": "sock_impl_set_options", 00:44:33.839 "params": { 00:44:33.839 "impl_name": "posix", 00:44:33.839 "recv_buf_size": 2097152, 00:44:33.839 "send_buf_size": 2097152, 00:44:33.839 "enable_recv_pipe": true, 00:44:33.839 "enable_quickack": false, 00:44:33.839 "enable_placement_id": 0, 00:44:33.839 "enable_zerocopy_send_server": true, 00:44:33.839 "enable_zerocopy_send_client": false, 00:44:33.839 "zerocopy_threshold": 0, 00:44:33.839 "tls_version": 0, 00:44:33.839 "enable_ktls": false 00:44:33.839 } 00:44:33.839 } 00:44:33.839 ] 00:44:33.839 }, 00:44:33.839 { 00:44:33.839 "subsystem": "vmd", 00:44:33.839 "config": [] 00:44:33.839 }, 00:44:33.839 { 00:44:33.839 "subsystem": "accel", 00:44:33.839 "config": [ 00:44:33.839 { 00:44:33.839 "method": "accel_set_options", 00:44:33.839 "params": { 00:44:33.839 "small_cache_size": 128, 00:44:33.839 "large_cache_size": 16, 00:44:33.839 "task_count": 2048, 00:44:33.839 "sequence_count": 2048, 00:44:33.839 "buf_count": 2048 00:44:33.839 } 00:44:33.839 } 00:44:33.839 ] 00:44:33.839 }, 00:44:33.839 { 00:44:33.839 "subsystem": "bdev", 00:44:33.839 "config": [ 00:44:33.839 { 00:44:33.839 "method": "bdev_set_options", 00:44:33.839 "params": { 00:44:33.839 "bdev_io_pool_size": 65535, 00:44:33.839 "bdev_io_cache_size": 256, 00:44:33.839 "bdev_auto_examine": true, 00:44:33.839 "iobuf_small_cache_size": 128, 00:44:33.839 "iobuf_large_cache_size": 16 00:44:33.839 } 00:44:33.839 }, 00:44:33.839 { 00:44:33.839 "method": "bdev_raid_set_options", 00:44:33.839 "params": { 00:44:33.839 "process_window_size_kb": 1024, 00:44:33.839 "process_max_bandwidth_mb_sec": 0 00:44:33.839 } 00:44:33.839 }, 00:44:33.839 { 00:44:33.839 "method": "bdev_iscsi_set_options", 00:44:33.839 "params": { 00:44:33.839 "timeout_sec": 30 00:44:33.839 } 00:44:33.839 }, 00:44:33.839 { 00:44:33.839 "method": "bdev_nvme_set_options", 00:44:33.839 "params": { 00:44:33.839 "action_on_timeout": "none", 00:44:33.839 "timeout_us": 0, 00:44:33.839 "timeout_admin_us": 0, 00:44:33.839 "keep_alive_timeout_ms": 10000, 00:44:33.839 "arbitration_burst": 0, 00:44:33.839 "low_priority_weight": 0, 00:44:33.839 "medium_priority_weight": 0, 00:44:33.839 "high_priority_weight": 0, 00:44:33.839 "nvme_adminq_poll_period_us": 10000, 00:44:33.839 "nvme_ioq_poll_period_us": 0, 00:44:33.839 "io_queue_requests": 512, 00:44:33.839 
"delay_cmd_submit": true, 00:44:33.839 "transport_retry_count": 4, 00:44:33.839 "bdev_retry_count": 3, 00:44:33.839 "transport_ack_timeout": 0, 00:44:33.839 "ctrlr_loss_timeout_sec": 0, 00:44:33.839 "reconnect_delay_sec": 0, 00:44:33.839 "fast_io_fail_timeout_sec": 0, 00:44:33.839 "disable_auto_failback": false, 00:44:33.839 "generate_uuids": false, 00:44:33.839 "transport_tos": 0, 00:44:33.839 "nvme_error_stat": false, 00:44:33.839 "rdma_srq_size": 0, 00:44:33.839 "io_path_stat": false, 00:44:33.839 "allow_accel_sequence": false, 00:44:33.839 "rdma_max_cq_size": 0, 00:44:33.839 "rdma_cm_event_timeout_ms": 0, 00:44:33.839 "dhchap_digests": [ 00:44:33.839 "sha256", 00:44:33.839 "sha384", 00:44:33.839 "sha512" 00:44:33.839 ], 00:44:33.839 "dhchap_dhgroups": [ 00:44:33.839 "null", 00:44:33.839 "ffdhe2048", 00:44:33.839 "ffdhe3072", 00:44:33.839 "ffdhe4096", 00:44:33.839 "ffdhe6144", 00:44:33.839 "ffdhe8192" 00:44:33.839 ] 00:44:33.839 } 00:44:33.839 }, 00:44:33.839 { 00:44:33.839 "method": "bdev_nvme_attach_controller", 00:44:33.839 "params": { 00:44:33.839 "name": "nvme0", 00:44:33.839 "trtype": "TCP", 00:44:33.839 "adrfam": "IPv4", 00:44:33.839 "traddr": "127.0.0.1", 00:44:33.839 "trsvcid": "4420", 00:44:33.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:33.839 "prchk_reftag": false, 00:44:33.839 "prchk_guard": false, 00:44:33.839 "ctrlr_loss_timeout_sec": 0, 00:44:33.839 "reconnect_delay_sec": 0, 00:44:33.839 "fast_io_fail_timeout_sec": 0, 00:44:33.839 "psk": "key0", 00:44:33.839 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:33.839 "hdgst": false, 00:44:33.839 "ddgst": false, 00:44:33.839 "multipath": "multipath" 00:44:33.839 } 00:44:33.839 }, 00:44:33.839 { 00:44:33.839 "method": "bdev_nvme_set_hotplug", 00:44:33.839 "params": { 00:44:33.839 "period_us": 100000, 00:44:33.839 "enable": false 00:44:33.839 } 00:44:33.839 }, 00:44:33.839 { 00:44:33.839 "method": "bdev_wait_for_examine" 00:44:33.839 } 00:44:33.839 ] 00:44:33.839 }, 00:44:33.839 { 00:44:33.839 "subsystem": "nbd", 00:44:33.839 "config": [] 00:44:33.839 } 00:44:33.839 ] 00:44:33.839 }' 00:44:33.839 00:26:18 keyring_file -- keyring/file.sh@115 -- # killprocess 723555 00:44:33.839 00:26:18 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 723555 ']' 00:44:33.839 00:26:18 keyring_file -- common/autotest_common.sh@958 -- # kill -0 723555 00:44:33.839 00:26:18 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:33.839 00:26:18 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:33.839 00:26:18 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 723555 00:44:34.101 00:26:18 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:34.101 00:26:18 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:34.101 00:26:18 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 723555' 00:44:34.101 killing process with pid 723555 00:44:34.101 00:26:18 keyring_file -- common/autotest_common.sh@973 -- # kill 723555 00:44:34.101 Received shutdown signal, test time was about 1.000000 seconds 00:44:34.101 00:44:34.101 Latency(us) 00:44:34.101 [2024-12-09T23:26:18.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:34.101 [2024-12-09T23:26:18.574Z] =================================================================================================================== 00:44:34.101 [2024-12-09T23:26:18.574Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:34.101 00:26:18 
keyring_file -- common/autotest_common.sh@978 -- # wait 723555 00:44:34.101 00:26:18 keyring_file -- keyring/file.sh@118 -- # bperfpid=725197 00:44:34.101 00:26:18 keyring_file -- keyring/file.sh@120 -- # waitforlisten 725197 /var/tmp/bperf.sock 00:44:34.101 00:26:18 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 725197 ']' 00:44:34.101 00:26:18 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:34.101 00:26:18 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:34.101 00:26:18 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:34.101 00:26:18 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:34.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:34.101 00:26:18 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:34.101 00:26:18 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:34.101 "subsystems": [ 00:44:34.101 { 00:44:34.101 "subsystem": "keyring", 00:44:34.101 "config": [ 00:44:34.101 { 00:44:34.101 "method": "keyring_file_add_key", 00:44:34.101 "params": { 00:44:34.101 "name": "key0", 00:44:34.101 "path": "/tmp/tmp.FJ6h8NaSjI" 00:44:34.101 } 00:44:34.101 }, 00:44:34.101 { 00:44:34.101 "method": "keyring_file_add_key", 00:44:34.101 "params": { 00:44:34.101 "name": "key1", 00:44:34.101 "path": "/tmp/tmp.RhDuoGIC8T" 00:44:34.101 } 00:44:34.101 } 00:44:34.101 ] 00:44:34.101 }, 00:44:34.101 { 00:44:34.101 "subsystem": "iobuf", 00:44:34.101 "config": [ 00:44:34.101 { 00:44:34.101 "method": "iobuf_set_options", 00:44:34.101 "params": { 00:44:34.101 "small_pool_count": 8192, 00:44:34.101 "large_pool_count": 1024, 00:44:34.101 "small_bufsize": 8192, 00:44:34.101 "large_bufsize": 135168, 00:44:34.101 "enable_numa": false 00:44:34.101 } 00:44:34.101 } 00:44:34.101 ] 00:44:34.101 }, 00:44:34.101 { 00:44:34.101 "subsystem": "sock", 00:44:34.101 "config": [ 00:44:34.101 { 00:44:34.101 "method": "sock_set_default_impl", 00:44:34.101 "params": { 00:44:34.101 "impl_name": "posix" 00:44:34.101 } 00:44:34.101 }, 00:44:34.101 { 00:44:34.101 "method": "sock_impl_set_options", 00:44:34.101 "params": { 00:44:34.101 "impl_name": "ssl", 00:44:34.101 "recv_buf_size": 4096, 00:44:34.101 "send_buf_size": 4096, 00:44:34.101 "enable_recv_pipe": true, 00:44:34.101 "enable_quickack": false, 00:44:34.101 "enable_placement_id": 0, 00:44:34.101 "enable_zerocopy_send_server": true, 00:44:34.101 "enable_zerocopy_send_client": false, 00:44:34.101 "zerocopy_threshold": 0, 00:44:34.101 "tls_version": 0, 00:44:34.101 "enable_ktls": false 00:44:34.101 } 00:44:34.101 }, 00:44:34.101 { 00:44:34.101 "method": "sock_impl_set_options", 00:44:34.101 "params": { 00:44:34.101 "impl_name": "posix", 00:44:34.101 "recv_buf_size": 2097152, 00:44:34.101 "send_buf_size": 2097152, 00:44:34.101 "enable_recv_pipe": true, 00:44:34.101 "enable_quickack": false, 00:44:34.101 "enable_placement_id": 0, 00:44:34.101 "enable_zerocopy_send_server": true, 00:44:34.101 "enable_zerocopy_send_client": false, 00:44:34.101 "zerocopy_threshold": 0, 00:44:34.101 "tls_version": 0, 00:44:34.101 "enable_ktls": false 00:44:34.101 } 00:44:34.101 } 00:44:34.101 ] 00:44:34.101 }, 00:44:34.101 { 00:44:34.101 "subsystem": "vmd", 00:44:34.101 "config": [] 00:44:34.101 }, 
00:44:34.101 { 00:44:34.101 "subsystem": "accel", 00:44:34.101 "config": [ 00:44:34.101 { 00:44:34.101 "method": "accel_set_options", 00:44:34.101 "params": { 00:44:34.101 "small_cache_size": 128, 00:44:34.101 "large_cache_size": 16, 00:44:34.101 "task_count": 2048, 00:44:34.101 "sequence_count": 2048, 00:44:34.101 "buf_count": 2048 00:44:34.101 } 00:44:34.101 } 00:44:34.101 ] 00:44:34.101 }, 00:44:34.101 { 00:44:34.101 "subsystem": "bdev", 00:44:34.101 "config": [ 00:44:34.101 { 00:44:34.101 "method": "bdev_set_options", 00:44:34.101 "params": { 00:44:34.101 "bdev_io_pool_size": 65535, 00:44:34.101 "bdev_io_cache_size": 256, 00:44:34.101 "bdev_auto_examine": true, 00:44:34.101 "iobuf_small_cache_size": 128, 00:44:34.101 "iobuf_large_cache_size": 16 00:44:34.101 } 00:44:34.101 }, 00:44:34.101 { 00:44:34.101 "method": "bdev_raid_set_options", 00:44:34.101 "params": { 00:44:34.101 "process_window_size_kb": 1024, 00:44:34.101 "process_max_bandwidth_mb_sec": 0 00:44:34.101 } 00:44:34.101 }, 00:44:34.101 { 00:44:34.101 "method": "bdev_iscsi_set_options", 00:44:34.101 "params": { 00:44:34.101 "timeout_sec": 30 00:44:34.101 } 00:44:34.101 }, 00:44:34.101 { 00:44:34.101 "method": "bdev_nvme_set_options", 00:44:34.101 "params": { 00:44:34.101 "action_on_timeout": "none", 00:44:34.101 "timeout_us": 0, 00:44:34.101 "timeout_admin_us": 0, 00:44:34.101 "keep_alive_timeout_ms": 10000, 00:44:34.101 "arbitration_burst": 0, 00:44:34.101 "low_priority_weight": 0, 00:44:34.101 "medium_priority_weight": 0, 00:44:34.101 "high_priority_weight": 0, 00:44:34.101 "nvme_adminq_poll_period_us": 10000, 00:44:34.101 "nvme_ioq_poll_period_us": 0, 00:44:34.101 "io_queue_requests": 512, 00:44:34.101 "delay_cmd_submit": true, 00:44:34.101 "transport_retry_count": 4, 00:44:34.101 "bdev_retry_count": 3, 00:44:34.101 "transport_ack_timeout": 0, 00:44:34.101 "ctrlr_loss_timeout_sec": 0, 00:44:34.101 "reconnect_delay_sec": 0, 00:44:34.101 "fast_io_fail_timeout_sec": 0, 00:44:34.101 "disable_auto_failback": false, 00:44:34.101 "generate_uuids": false, 00:44:34.101 "transport_tos": 0, 00:44:34.101 "nvme_error_stat": false, 00:44:34.101 "rdma_srq_size": 0, 00:44:34.101 "io_path_stat": false, 00:44:34.101 "allow_accel_sequence": false, 00:44:34.101 "rdma_max_cq_size": 0, 00:44:34.101 "rdma_cm_event_timeout_ms": 0, 00:44:34.101 "dhchap_digests": [ 00:44:34.101 "sha256", 00:44:34.101 "sha384", 00:44:34.101 "sha512" 00:44:34.101 ], 00:44:34.101 "dhchap_dhgroups": [ 00:44:34.101 "null", 00:44:34.101 "ffdhe2048", 00:44:34.101 "ffdhe3072", 00:44:34.101 "ffdhe4096", 00:44:34.101 "ffdhe6144", 00:44:34.101 "ffdhe8192" 00:44:34.101 ] 00:44:34.101 } 00:44:34.101 }, 00:44:34.101 { 00:44:34.101 "method": "bdev_nvme_attach_controller", 00:44:34.101 "params": { 00:44:34.101 "name": "nvme0", 00:44:34.101 "trtype": "TCP", 00:44:34.101 "adrfam": "IPv4", 00:44:34.101 "traddr": "127.0.0.1", 00:44:34.101 "trsvcid": "4420", 00:44:34.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:34.101 "prchk_reftag": false, 00:44:34.101 "prchk_guard": false, 00:44:34.101 "ctrlr_loss_timeout_sec": 0, 00:44:34.101 "reconnect_delay_sec": 0, 00:44:34.101 "fast_io_fail_timeout_sec": 0, 00:44:34.102 "psk": "key0", 00:44:34.102 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:34.102 "hdgst": false, 00:44:34.102 "ddgst": false, 00:44:34.102 "multipath": "multipath" 00:44:34.102 } 00:44:34.102 }, 00:44:34.102 { 00:44:34.102 "method": "bdev_nvme_set_hotplug", 00:44:34.102 "params": { 00:44:34.102 "period_us": 100000, 00:44:34.102 "enable": false 00:44:34.102 } 00:44:34.102 }, 
00:44:34.102 { 00:44:34.102 "method": "bdev_wait_for_examine" 00:44:34.102 } 00:44:34.102 ] 00:44:34.102 }, 00:44:34.102 { 00:44:34.102 "subsystem": "nbd", 00:44:34.102 "config": [] 00:44:34.102 } 00:44:34.102 ] 00:44:34.102 }' 00:44:34.102 00:26:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:34.361 [2024-12-10 00:26:18.574033] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 00:44:34.361 [2024-12-10 00:26:18.574086] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725197 ] 00:44:34.361 [2024-12-10 00:26:18.659512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:34.361 [2024-12-10 00:26:18.697539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:34.621 [2024-12-10 00:26:18.859593] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:35.190 00:26:19 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:35.190 00:26:19 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:35.190 00:26:19 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:35.190 00:26:19 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:35.190 00:26:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:35.190 00:26:19 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:35.190 00:26:19 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:35.190 00:26:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:35.190 00:26:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:35.190 00:26:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:35.190 00:26:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:35.190 00:26:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:35.449 00:26:19 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:35.450 00:26:19 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:35.450 00:26:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:35.450 00:26:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:35.450 00:26:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:35.450 00:26:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:35.450 00:26:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:35.708 00:26:20 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:35.709 00:26:20 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:35.709 00:26:20 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:35.709 00:26:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:35.968 00:26:20 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:35.968 00:26:20 keyring_file -- keyring/file.sh@1 -- # cleanup 00:44:35.968 00:26:20 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.FJ6h8NaSjI /tmp/tmp.RhDuoGIC8T 00:44:35.968 00:26:20 keyring_file -- keyring/file.sh@20 -- # killprocess 725197 00:44:35.968 00:26:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 725197 ']' 00:44:35.968 00:26:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 725197 00:44:35.968 00:26:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:35.968 00:26:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:35.968 00:26:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 725197 00:44:35.968 00:26:20 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:35.968 00:26:20 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:35.968 00:26:20 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 725197' 00:44:35.968 killing process with pid 725197 00:44:35.968 00:26:20 keyring_file -- common/autotest_common.sh@973 -- # kill 725197 00:44:35.968 Received shutdown signal, test time was about 1.000000 seconds 00:44:35.968 00:44:35.968 Latency(us) 00:44:35.968 [2024-12-09T23:26:20.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:35.968 [2024-12-09T23:26:20.441Z] =================================================================================================================== 00:44:35.968 [2024-12-09T23:26:20.441Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:35.968 00:26:20 keyring_file -- common/autotest_common.sh@978 -- # wait 725197 00:44:36.227 00:26:20 keyring_file -- keyring/file.sh@21 -- # killprocess 723492 00:44:36.227 00:26:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 723492 ']' 00:44:36.227 00:26:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 723492 00:44:36.227 00:26:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:36.227 00:26:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:36.227 00:26:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 723492 00:44:36.227 00:26:20 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:36.227 00:26:20 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:36.227 00:26:20 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 723492' 00:44:36.227 killing process with pid 723492 00:44:36.227 00:26:20 keyring_file -- common/autotest_common.sh@973 -- # kill 723492 00:44:36.227 00:26:20 keyring_file -- common/autotest_common.sh@978 -- # wait 723492 00:44:36.486 00:44:36.486 real 0m12.305s 00:44:36.486 user 0m29.235s 00:44:36.486 sys 0m3.318s 00:44:36.486 00:26:20 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:36.486 00:26:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:36.486 ************************************ 00:44:36.486 END TEST keyring_file 00:44:36.486 ************************************ 00:44:36.486 00:26:20 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:36.486 00:26:20 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:36.486 00:26:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:36.486 00:26:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:36.486 00:26:20 -- common/autotest_common.sh@10 -- # 
set +x 00:44:36.486 ************************************ 00:44:36.486 START TEST keyring_linux 00:44:36.486 ************************************ 00:44:36.486 00:26:20 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:36.486 Joined session keyring: 909522261 00:44:36.746 * Looking for test storage... 00:44:36.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:36.746 00:26:21 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:36.746 00:26:21 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:44:36.746 00:26:21 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:36.746 00:26:21 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:36.746 00:26:21 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:36.747 00:26:21 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:36.747 00:26:21 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:36.747 00:26:21 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:36.747 00:26:21 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:36.747 00:26:21 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:36.747 00:26:21 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:36.747 00:26:21 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:36.747 00:26:21 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:36.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:36.747 --rc genhtml_branch_coverage=1 00:44:36.747 --rc genhtml_function_coverage=1 00:44:36.747 --rc genhtml_legend=1 00:44:36.747 --rc geninfo_all_blocks=1 00:44:36.747 --rc geninfo_unexecuted_blocks=1 00:44:36.747 00:44:36.747 ' 00:44:36.747 00:26:21 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:36.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:36.747 --rc genhtml_branch_coverage=1 00:44:36.747 --rc genhtml_function_coverage=1 00:44:36.747 --rc genhtml_legend=1 00:44:36.747 --rc geninfo_all_blocks=1 00:44:36.747 --rc geninfo_unexecuted_blocks=1 00:44:36.747 00:44:36.747 ' 00:44:36.747 00:26:21 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:36.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:36.747 --rc genhtml_branch_coverage=1 00:44:36.747 --rc genhtml_function_coverage=1 00:44:36.747 --rc genhtml_legend=1 00:44:36.747 --rc geninfo_all_blocks=1 00:44:36.747 --rc geninfo_unexecuted_blocks=1 00:44:36.747 00:44:36.747 ' 00:44:36.747 00:26:21 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:36.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:36.747 --rc genhtml_branch_coverage=1 00:44:36.747 --rc genhtml_function_coverage=1 00:44:36.747 --rc genhtml_legend=1 00:44:36.747 --rc geninfo_all_blocks=1 00:44:36.747 --rc geninfo_unexecuted_blocks=1 00:44:36.747 00:44:36.747 ' 00:44:36.747 00:26:21 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:36.747 00:26:21 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:36.747 00:26:21 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:36.747 00:26:21 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:36.747 00:26:21 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:36.747 00:26:21 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:36.747 00:26:21 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:36.747 00:26:21 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:36.747 00:26:21 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:36.747 00:26:21 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:36.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:36.747 00:26:21 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:36.747 00:26:21 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:36.747 00:26:21 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:36.747 00:26:21 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:36.747 00:26:21 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:36.747 00:26:21 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:36.747 /tmp/:spdk-test:key0 00:44:36.747 00:26:21 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:36.747 00:26:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:36.747 
00:26:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:36.747 00:26:21 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:37.007 00:26:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:37.007 00:26:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:37.007 /tmp/:spdk-test:key1 00:44:37.007 00:26:21 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=725589 00:44:37.007 00:26:21 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:37.007 00:26:21 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 725589 00:44:37.007 00:26:21 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 725589 ']' 00:44:37.007 00:26:21 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:37.007 00:26:21 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:37.007 00:26:21 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:37.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:37.007 00:26:21 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:37.007 00:26:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:37.007 [2024-12-10 00:26:21.290520] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:44:37.007 [2024-12-10 00:26:21.290575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725589 ] 00:44:37.007 [2024-12-10 00:26:21.383598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:37.007 [2024-12-10 00:26:21.424558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:37.946 00:26:22 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:37.946 00:26:22 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:37.946 00:26:22 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:37.946 00:26:22 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:37.946 00:26:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:37.946 [2024-12-10 00:26:22.126496] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:37.946 null0 00:44:37.946 [2024-12-10 00:26:22.158545] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:37.946 [2024-12-10 00:26:22.158940] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:37.946 00:26:22 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:37.946 00:26:22 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:37.946 540515582 00:44:37.946 00:26:22 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:37.946 129360505 00:44:37.946 00:26:22 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=725852 00:44:37.946 00:26:22 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 725852 /var/tmp/bperf.sock 00:44:37.946 00:26:22 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:37.946 00:26:22 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 725852 ']' 00:44:37.946 00:26:22 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:37.946 00:26:22 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:37.946 00:26:22 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:37.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:37.946 00:26:22 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:37.946 00:26:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:37.946 [2024-12-10 00:26:22.233708] Starting SPDK v25.01-pre git sha1 969b360d9 / DPDK 24.03.0 initialization... 
00:44:37.946 [2024-12-10 00:26:22.233754] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725852 ] 00:44:37.946 [2024-12-10 00:26:22.322235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:37.946 [2024-12-10 00:26:22.360513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:38.884 00:26:23 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:38.884 00:26:23 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:38.884 00:26:23 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:38.884 00:26:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:38.884 00:26:23 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:38.884 00:26:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:39.143 00:26:23 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:39.143 00:26:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:39.401 [2024-12-10 00:26:23.646840] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:39.401 nvme0n1 00:44:39.401 00:26:23 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:44:39.401 00:26:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:39.401 00:26:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:39.401 00:26:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:39.401 00:26:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:39.401 00:26:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.660 00:26:23 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:39.660 00:26:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:39.660 00:26:23 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:39.660 00:26:23 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:39.660 00:26:23 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:39.660 00:26:23 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:39.660 00:26:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.660 00:26:24 keyring_linux -- keyring/linux.sh@25 -- # sn=540515582 00:44:39.660 00:26:24 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:39.660 00:26:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:39.660 00:26:24 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 540515582 == \5\4\0\5\1\5\5\8\2 ]] 00:44:39.660 00:26:24 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 540515582 00:44:39.919 00:26:24 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:39.919 00:26:24 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:39.919 Running I/O for 1 seconds... 00:44:40.855 20756.00 IOPS, 81.08 MiB/s 00:44:40.855 Latency(us) 00:44:40.855 [2024-12-09T23:26:25.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:40.855 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:40.855 nvme0n1 : 1.01 20756.56 81.08 0.00 0.00 6145.78 2018.51 7444.89 00:44:40.855 [2024-12-09T23:26:25.328Z] =================================================================================================================== 00:44:40.855 [2024-12-09T23:26:25.328Z] Total : 20756.56 81.08 0.00 0.00 6145.78 2018.51 7444.89 00:44:40.855 { 00:44:40.855 "results": [ 00:44:40.855 { 00:44:40.855 "job": "nvme0n1", 00:44:40.855 "core_mask": "0x2", 00:44:40.855 "workload": "randread", 00:44:40.855 "status": "finished", 00:44:40.855 "queue_depth": 128, 00:44:40.855 "io_size": 4096, 00:44:40.855 "runtime": 1.006188, 00:44:40.855 "iops": 20756.558416518583, 00:44:40.855 "mibps": 81.08030631452571, 00:44:40.855 "io_failed": 0, 00:44:40.855 "io_timeout": 0, 00:44:40.855 "avg_latency_us": 6145.779686013886, 00:44:40.855 "min_latency_us": 2018.5088, 00:44:40.855 "max_latency_us": 7444.8896 00:44:40.855 } 00:44:40.855 ], 00:44:40.855 "core_count": 1 00:44:40.855 } 00:44:40.855 00:26:25 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:40.855 00:26:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:41.113 00:26:25 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:41.113 00:26:25 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:41.113 00:26:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:41.113 00:26:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:41.113 00:26:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:41.113 00:26:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:41.372 00:26:25 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:41.372 00:26:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:41.372 00:26:25 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:41.372 00:26:25 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:41.372 00:26:25 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:41.372 00:26:25 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:44:41.372 00:26:25 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:41.372 00:26:25 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:41.372 00:26:25 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:41.372 00:26:25 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:41.372 00:26:25 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:41.372 00:26:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:41.372 [2024-12-10 00:26:25.839570] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:41.372 [2024-12-10 00:26:25.840299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c8b210 (107): Transport endpoint is not connected 00:44:41.372 [2024-12-10 00:26:25.841293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c8b210 (9): Bad file descriptor 00:44:41.372 [2024-12-10 00:26:25.842295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:41.372 [2024-12-10 00:26:25.842308] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:41.372 [2024-12-10 00:26:25.842318] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:41.372 [2024-12-10 00:26:25.842328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:44:41.372 request: 00:44:41.372 { 00:44:41.372 "name": "nvme0", 00:44:41.372 "trtype": "tcp", 00:44:41.372 "traddr": "127.0.0.1", 00:44:41.372 "adrfam": "ipv4", 00:44:41.372 "trsvcid": "4420", 00:44:41.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:41.372 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:41.372 "prchk_reftag": false, 00:44:41.372 "prchk_guard": false, 00:44:41.372 "hdgst": false, 00:44:41.372 "ddgst": false, 00:44:41.372 "psk": ":spdk-test:key1", 00:44:41.372 "allow_unrecognized_csi": false, 00:44:41.372 "method": "bdev_nvme_attach_controller", 00:44:41.372 "req_id": 1 00:44:41.372 } 00:44:41.372 Got JSON-RPC error response 00:44:41.372 response: 00:44:41.372 { 00:44:41.372 "code": -5, 00:44:41.372 "message": "Input/output error" 00:44:41.372 } 00:44:41.631 00:26:25 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:44:41.631 00:26:25 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:41.631 00:26:25 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:41.631 00:26:25 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@33 -- # sn=540515582 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 540515582 00:44:41.631 1 links removed 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@33 -- # sn=129360505 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 129360505 00:44:41.631 1 links removed 00:44:41.631 00:26:25 keyring_linux -- keyring/linux.sh@41 -- # killprocess 725852 00:44:41.631 00:26:25 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 725852 ']' 00:44:41.631 00:26:25 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 725852 00:44:41.631 00:26:25 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:41.631 00:26:25 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:41.631 00:26:25 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 725852 00:44:41.631 00:26:25 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:41.631 00:26:25 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:41.631 00:26:25 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 725852' 00:44:41.631 killing process with pid 725852 00:44:41.631 00:26:25 keyring_linux -- common/autotest_common.sh@973 -- # kill 725852 00:44:41.631 Received shutdown signal, test time was about 1.000000 seconds 00:44:41.631 00:44:41.631 
Latency(us) 00:44:41.631 [2024-12-09T23:26:26.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:41.631 [2024-12-09T23:26:26.104Z] =================================================================================================================== 00:44:41.631 [2024-12-09T23:26:26.104Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:41.631 00:26:25 keyring_linux -- common/autotest_common.sh@978 -- # wait 725852 00:44:41.631 00:26:26 keyring_linux -- keyring/linux.sh@42 -- # killprocess 725589 00:44:41.631 00:26:26 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 725589 ']' 00:44:41.631 00:26:26 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 725589 00:44:41.631 00:26:26 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:41.890 00:26:26 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:41.890 00:26:26 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 725589 00:44:41.890 00:26:26 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:41.890 00:26:26 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:41.890 00:26:26 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 725589' 00:44:41.890 killing process with pid 725589 00:44:41.890 00:26:26 keyring_linux -- common/autotest_common.sh@973 -- # kill 725589 00:44:41.890 00:26:26 keyring_linux -- common/autotest_common.sh@978 -- # wait 725589 00:44:42.150 00:44:42.150 real 0m5.578s 00:44:42.150 user 0m10.098s 00:44:42.150 sys 0m1.769s 00:44:42.150 00:26:26 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:42.150 00:26:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:42.150 ************************************ 00:44:42.150 END TEST keyring_linux 00:44:42.150 ************************************ 00:44:42.150 00:26:26 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:42.150 00:26:26 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:42.150 00:26:26 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:42.150 00:26:26 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:42.150 00:26:26 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:42.150 00:26:26 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:42.150 00:26:26 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:42.150 00:26:26 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:44:42.150 00:26:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:42.150 00:26:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:42.150 00:26:26 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:42.150 00:26:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:42.150 00:26:26 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:42.150 00:26:26 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:42.150 00:26:26 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:42.150 00:26:26 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:44:42.150 00:26:26 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:42.150 00:26:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:42.151 00:26:26 -- common/autotest_common.sh@10 -- # set +x 00:44:42.151 00:26:26 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:42.151 00:26:26 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:42.151 00:26:26 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:42.151 00:26:26 -- common/autotest_common.sh@10 -- # set +x 00:44:48.723 INFO: APP EXITING 00:44:48.723 INFO: 
killing all VMs 00:44:48.723 INFO: killing vhost app 00:44:48.723 INFO: EXIT DONE 00:44:52.016 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:44:52.016 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:44:52.276 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:44:52.276 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:44:52.276 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:44:52.276 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:44:52.276 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:44:52.276 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:44:52.276 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:44:52.276 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:44:52.276 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:44:52.536 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:44:52.536 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:44:52.536 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:44:52.536 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:44:52.536 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:44:52.536 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:44:55.829 Cleaning 00:44:55.829 Removing: /var/run/dpdk/spdk0/config 00:44:55.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:44:55.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:44:55.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:44:55.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:44:55.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:44:55.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:44:55.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:44:56.089 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:44:56.089 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:44:56.089 Removing: /var/run/dpdk/spdk0/hugepage_info 00:44:56.089 Removing: /var/run/dpdk/spdk1/config 00:44:56.089 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:44:56.089 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:44:56.089 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:44:56.089 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:44:56.089 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:44:56.089 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:44:56.089 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:44:56.089 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:44:56.089 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:44:56.089 Removing: /var/run/dpdk/spdk1/hugepage_info 00:44:56.089 Removing: /var/run/dpdk/spdk2/config 00:44:56.089 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:44:56.089 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:44:56.089 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:44:56.089 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:44:56.089 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:44:56.089 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:44:56.089 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:44:56.089 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:44:56.089 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:44:56.089 Removing: /var/run/dpdk/spdk2/hugepage_info 00:44:56.089 Removing: /var/run/dpdk/spdk3/config 00:44:56.089 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:44:56.089 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:44:56.089 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:44:56.089 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:44:56.089 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:44:56.089 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:44:56.089 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:44:56.089 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:44:56.089 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:44:56.089 Removing: /var/run/dpdk/spdk3/hugepage_info 00:44:56.089 Removing: /var/run/dpdk/spdk4/config 00:44:56.089 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:44:56.089 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:44:56.089 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:44:56.089 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:44:56.089 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:44:56.089 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:44:56.089 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:44:56.089 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:44:56.089 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:44:56.089 Removing: /var/run/dpdk/spdk4/hugepage_info 00:44:56.089 Removing: /dev/shm/bdev_svc_trace.1 00:44:56.089 Removing: /dev/shm/nvmf_trace.0 00:44:56.089 Removing: /dev/shm/spdk_tgt_trace.pid212667 00:44:56.089 Removing: /var/run/dpdk/spdk0 00:44:56.089 Removing: /var/run/dpdk/spdk1 00:44:56.089 Removing: /var/run/dpdk/spdk2 00:44:56.089 Removing: /var/run/dpdk/spdk3 00:44:56.089 Removing: /var/run/dpdk/spdk4 00:44:56.349 Removing: /var/run/dpdk/spdk_pid210182 00:44:56.349 Removing: /var/run/dpdk/spdk_pid211436 00:44:56.349 Removing: /var/run/dpdk/spdk_pid212667 00:44:56.349 Removing: /var/run/dpdk/spdk_pid213376 00:44:56.349 Removing: /var/run/dpdk/spdk_pid214459 00:44:56.349 Removing: /var/run/dpdk/spdk_pid214727 00:44:56.349 Removing: /var/run/dpdk/spdk_pid215590 00:44:56.349 Removing: /var/run/dpdk/spdk_pid215850 00:44:56.349 Removing: /var/run/dpdk/spdk_pid216234 00:44:56.349 Removing: /var/run/dpdk/spdk_pid217951 00:44:56.349 Removing: /var/run/dpdk/spdk_pid219142 00:44:56.349 Removing: /var/run/dpdk/spdk_pid219474 00:44:56.349 Removing: /var/run/dpdk/spdk_pid219905 00:44:56.349 Removing: /var/run/dpdk/spdk_pid220362 00:44:56.349 Removing: /var/run/dpdk/spdk_pid220725 00:44:56.349 Removing: /var/run/dpdk/spdk_pid220920 00:44:56.349 Removing: /var/run/dpdk/spdk_pid221073 00:44:56.349 Removing: /var/run/dpdk/spdk_pid221374 00:44:56.349 Removing: /var/run/dpdk/spdk_pid222297 00:44:56.349 Removing: /var/run/dpdk/spdk_pid225417 00:44:56.349 Removing: /var/run/dpdk/spdk_pid225731 00:44:56.349 Removing: /var/run/dpdk/spdk_pid226070 00:44:56.349 Removing: /var/run/dpdk/spdk_pid226207 00:44:56.349 Removing: /var/run/dpdk/spdk_pid226759 00:44:56.349 Removing: /var/run/dpdk/spdk_pid226850 00:44:56.349 Removing: /var/run/dpdk/spdk_pid227355 00:44:56.349 Removing: /var/run/dpdk/spdk_pid227600 00:44:56.349 Removing: /var/run/dpdk/spdk_pid227904 00:44:56.349 Removing: /var/run/dpdk/spdk_pid228095 00:44:56.349 Removing: /var/run/dpdk/spdk_pid228212 00:44:56.349 Removing: /var/run/dpdk/spdk_pid228468 00:44:56.349 Removing: /var/run/dpdk/spdk_pid228990 00:44:56.349 Removing: /var/run/dpdk/spdk_pid229146 00:44:56.349 Removing: /var/run/dpdk/spdk_pid229477 00:44:56.349 Removing: /var/run/dpdk/spdk_pid233773 00:44:56.349 
Removing: /var/run/dpdk/spdk_pid238402 00:44:56.349 Removing: /var/run/dpdk/spdk_pid249538 00:44:56.349 Removing: /var/run/dpdk/spdk_pid250152 00:44:56.349 Removing: /var/run/dpdk/spdk_pid254877 00:44:56.349 Removing: /var/run/dpdk/spdk_pid255159 00:44:56.349 Removing: /var/run/dpdk/spdk_pid259788 00:44:56.349 Removing: /var/run/dpdk/spdk_pid266077 00:44:56.349 Removing: /var/run/dpdk/spdk_pid268991 00:44:56.349 Removing: /var/run/dpdk/spdk_pid280127 00:44:56.349 Removing: /var/run/dpdk/spdk_pid289817 00:44:56.349 Removing: /var/run/dpdk/spdk_pid291657 00:44:56.349 Removing: /var/run/dpdk/spdk_pid293003 00:44:56.349 Removing: /var/run/dpdk/spdk_pid311219 00:44:56.349 Removing: /var/run/dpdk/spdk_pid315521 00:44:56.349 Removing: /var/run/dpdk/spdk_pid364636 00:44:56.349 Removing: /var/run/dpdk/spdk_pid370436 00:44:56.349 Removing: /var/run/dpdk/spdk_pid376619 00:44:56.349 Removing: /var/run/dpdk/spdk_pid383541 00:44:56.349 Removing: /var/run/dpdk/spdk_pid383617 00:44:56.349 Removing: /var/run/dpdk/spdk_pid384510 00:44:56.349 Removing: /var/run/dpdk/spdk_pid385304 00:44:56.349 Removing: /var/run/dpdk/spdk_pid386331 00:44:56.349 Removing: /var/run/dpdk/spdk_pid386883 00:44:56.349 Removing: /var/run/dpdk/spdk_pid386886 00:44:56.349 Removing: /var/run/dpdk/spdk_pid387154 00:44:56.349 Removing: /var/run/dpdk/spdk_pid387307 00:44:56.608 Removing: /var/run/dpdk/spdk_pid387449 00:44:56.609 Removing: /var/run/dpdk/spdk_pid388747 00:44:56.609 Removing: /var/run/dpdk/spdk_pid389637 00:44:56.609 Removing: /var/run/dpdk/spdk_pid390592 00:44:56.609 Removing: /var/run/dpdk/spdk_pid391124 00:44:56.609 Removing: /var/run/dpdk/spdk_pid391135 00:44:56.609 Removing: /var/run/dpdk/spdk_pid391402 00:44:56.609 Removing: /var/run/dpdk/spdk_pid392841 00:44:56.609 Removing: /var/run/dpdk/spdk_pid393995 00:44:56.609 Removing: /var/run/dpdk/spdk_pid402648 00:44:56.609 Removing: /var/run/dpdk/spdk_pid432802 00:44:56.609 Removing: /var/run/dpdk/spdk_pid437612 00:44:56.609 Removing: /var/run/dpdk/spdk_pid439432 00:44:56.609 Removing: /var/run/dpdk/spdk_pid441289 00:44:56.609 Removing: /var/run/dpdk/spdk_pid441518 00:44:56.609 Removing: /var/run/dpdk/spdk_pid441571 00:44:56.609 Removing: /var/run/dpdk/spdk_pid441834 00:44:56.609 Removing: /var/run/dpdk/spdk_pid442420 00:44:56.609 Removing: /var/run/dpdk/spdk_pid444407 00:44:56.609 Removing: /var/run/dpdk/spdk_pid445401 00:44:56.609 Removing: /var/run/dpdk/spdk_pid445839 00:44:56.609 Removing: /var/run/dpdk/spdk_pid448114 00:44:56.609 Removing: /var/run/dpdk/spdk_pid448672 00:44:56.609 Removing: /var/run/dpdk/spdk_pid449492 00:44:56.609 Removing: /var/run/dpdk/spdk_pid453966 00:44:56.609 Removing: /var/run/dpdk/spdk_pid459835 00:44:56.609 Removing: /var/run/dpdk/spdk_pid459836 00:44:56.609 Removing: /var/run/dpdk/spdk_pid459837 00:44:56.609 Removing: /var/run/dpdk/spdk_pid464447 00:44:56.609 Removing: /var/run/dpdk/spdk_pid473740 00:44:56.609 Removing: /var/run/dpdk/spdk_pid477832 00:44:56.609 Removing: /var/run/dpdk/spdk_pid484337 00:44:56.609 Removing: /var/run/dpdk/spdk_pid485690 00:44:56.609 Removing: /var/run/dpdk/spdk_pid487165 00:44:56.609 Removing: /var/run/dpdk/spdk_pid488622 00:44:56.609 Removing: /var/run/dpdk/spdk_pid493593 00:44:56.609 Removing: /var/run/dpdk/spdk_pid498327 00:44:56.609 Removing: /var/run/dpdk/spdk_pid502701 00:44:56.609 Removing: /var/run/dpdk/spdk_pid510703 00:44:56.609 Removing: /var/run/dpdk/spdk_pid510760 00:44:56.609 Removing: /var/run/dpdk/spdk_pid516288 00:44:56.609 Removing: /var/run/dpdk/spdk_pid516548 00:44:56.609 Removing: 
/var/run/dpdk/spdk_pid516807 00:44:56.609 Removing: /var/run/dpdk/spdk_pid517151 00:44:56.609 Removing: /var/run/dpdk/spdk_pid517262 00:44:56.609 Removing: /var/run/dpdk/spdk_pid522127 00:44:56.609 Removing: /var/run/dpdk/spdk_pid522781 00:44:56.609 Removing: /var/run/dpdk/spdk_pid527440 00:44:56.609 Removing: /var/run/dpdk/spdk_pid530301 00:44:56.609 Removing: /var/run/dpdk/spdk_pid536120 00:44:56.609 Removing: /var/run/dpdk/spdk_pid542118 00:44:56.609 Removing: /var/run/dpdk/spdk_pid551297 00:44:56.609 Removing: /var/run/dpdk/spdk_pid559209 00:44:56.609 Removing: /var/run/dpdk/spdk_pid559226 00:44:56.609 Removing: /var/run/dpdk/spdk_pid579800 00:44:56.609 Removing: /var/run/dpdk/spdk_pid580349 00:44:56.609 Removing: /var/run/dpdk/spdk_pid581143 00:44:56.609 Removing: /var/run/dpdk/spdk_pid581686 00:44:56.609 Removing: /var/run/dpdk/spdk_pid582541 00:44:56.609 Removing: /var/run/dpdk/spdk_pid583077 00:44:56.609 Removing: /var/run/dpdk/spdk_pid583684 00:44:56.869 Removing: /var/run/dpdk/spdk_pid584285 00:44:56.869 Removing: /var/run/dpdk/spdk_pid588929 00:44:56.869 Removing: /var/run/dpdk/spdk_pid589198 00:44:56.869 Removing: /var/run/dpdk/spdk_pid595430 00:44:56.869 Removing: /var/run/dpdk/spdk_pid595600 00:44:56.869 Removing: /var/run/dpdk/spdk_pid601420 00:44:56.869 Removing: /var/run/dpdk/spdk_pid606522 00:44:56.869 Removing: /var/run/dpdk/spdk_pid616543 00:44:56.869 Removing: /var/run/dpdk/spdk_pid617294 00:44:56.869 Removing: /var/run/dpdk/spdk_pid621591 00:44:56.869 Removing: /var/run/dpdk/spdk_pid621887 00:44:56.869 Removing: /var/run/dpdk/spdk_pid626384 00:44:56.869 Removing: /var/run/dpdk/spdk_pid632485 00:44:56.869 Removing: /var/run/dpdk/spdk_pid635057 00:44:56.869 Removing: /var/run/dpdk/spdk_pid645838 00:44:56.869 Removing: /var/run/dpdk/spdk_pid655643 00:44:56.869 Removing: /var/run/dpdk/spdk_pid657466 00:44:56.869 Removing: /var/run/dpdk/spdk_pid658267 00:44:56.869 Removing: /var/run/dpdk/spdk_pid675522 00:44:56.869 Removing: /var/run/dpdk/spdk_pid679650 00:44:56.869 Removing: /var/run/dpdk/spdk_pid682452 00:44:56.869 Removing: /var/run/dpdk/spdk_pid690959 00:44:56.869 Removing: /var/run/dpdk/spdk_pid690971 00:44:56.869 Removing: /var/run/dpdk/spdk_pid697045 00:44:56.869 Removing: /var/run/dpdk/spdk_pid699055 00:44:56.869 Removing: /var/run/dpdk/spdk_pid701039 00:44:56.869 Removing: /var/run/dpdk/spdk_pid702228 00:44:56.869 Removing: /var/run/dpdk/spdk_pid704261 00:44:56.869 Removing: /var/run/dpdk/spdk_pid705443 00:44:56.869 Removing: /var/run/dpdk/spdk_pid714820 00:44:56.869 Removing: /var/run/dpdk/spdk_pid715344 00:44:56.869 Removing: /var/run/dpdk/spdk_pid715873 00:44:56.869 Removing: /var/run/dpdk/spdk_pid718319 00:44:56.869 Removing: /var/run/dpdk/spdk_pid718849 00:44:56.869 Removing: /var/run/dpdk/spdk_pid719383 00:44:56.869 Removing: /var/run/dpdk/spdk_pid723492 00:44:56.869 Removing: /var/run/dpdk/spdk_pid723555 00:44:56.869 Removing: /var/run/dpdk/spdk_pid725197 00:44:56.869 Removing: /var/run/dpdk/spdk_pid725589 00:44:56.869 Removing: /var/run/dpdk/spdk_pid725852 00:44:56.869 Clean 00:44:56.869 00:26:41 -- common/autotest_common.sh@1453 -- # return 0 00:44:56.869 00:26:41 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:44:56.869 00:26:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:56.869 00:26:41 -- common/autotest_common.sh@10 -- # set +x 00:44:57.129 00:26:41 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:44:57.129 00:26:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:57.129 00:26:41 -- common/autotest_common.sh@10 -- 
# set +x 00:44:57.129 00:26:41 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:57.129 00:26:41 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:44:57.129 00:26:41 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:44:57.129 00:26:41 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:44:57.129 00:26:41 -- spdk/autotest.sh@398 -- # hostname 00:44:57.129 00:26:41 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:44:57.129 geninfo: WARNING: invalid characters removed from testname! 00:45:19.326 00:27:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:21.234 00:27:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:23.141 00:27:07 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:24.520 00:27:08 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:26.428 00:27:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:28.335 00:27:12 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:30.243 00:27:14 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:45:30.243 00:27:14 -- spdk/autorun.sh@1 -- $ timing_finish 00:45:30.243 00:27:14 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:45:30.243 00:27:14 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:45:30.243 00:27:14 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:45:30.243 00:27:14 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:30.243 + [[ -n 129829 ]] 00:45:30.243 + sudo kill 129829 00:45:30.255 [Pipeline] } 00:45:30.270 [Pipeline] // stage 00:45:30.276 [Pipeline] } 00:45:30.291 [Pipeline] // timeout 00:45:30.296 [Pipeline] } 00:45:30.311 [Pipeline] // catchError 00:45:30.316 [Pipeline] } 00:45:30.331 [Pipeline] // wrap 00:45:30.338 [Pipeline] } 00:45:30.351 [Pipeline] // catchError 00:45:30.361 [Pipeline] stage 00:45:30.363 [Pipeline] { (Epilogue) 00:45:30.378 [Pipeline] catchError 00:45:30.379 [Pipeline] { 00:45:30.394 [Pipeline] echo 00:45:30.396 Cleanup processes 00:45:30.403 [Pipeline] sh 00:45:30.696 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:30.696 738007 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:30.713 [Pipeline] sh 00:45:31.002 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:31.002 ++ grep -v 'sudo pgrep' 00:45:31.002 ++ awk '{print $1}' 00:45:31.002 + sudo kill -9 00:45:31.002 + true 00:45:31.014 [Pipeline] sh 00:45:31.301 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:45:31.301 xz: Reduced the number of threads from 112 to 96 to not exceed the memory usage limit of 15,978 MiB 00:45:37.874 xz: Reduced the number of threads from 112 to 96 to not exceed the memory usage limit of 15,978 MiB 00:45:42.086 [Pipeline] sh 00:45:42.375 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:45:42.375 Artifacts sizes are good 00:45:42.389 [Pipeline] archiveArtifacts 00:45:42.395 Archiving artifacts 00:45:42.830 [Pipeline] sh 00:45:43.158 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:45:43.172 [Pipeline] cleanWs 00:45:43.182 [WS-CLEANUP] Deleting project workspace... 00:45:43.182 [WS-CLEANUP] Deferred wipeout is used... 00:45:43.197 [WS-CLEANUP] done 00:45:43.205 [Pipeline] } 00:45:43.221 [Pipeline] // catchError 00:45:43.233 [Pipeline] sh 00:45:43.518 + logger -p user.info -t JENKINS-CI 00:45:43.527 [Pipeline] } 00:45:43.540 [Pipeline] // stage 00:45:43.545 [Pipeline] } 00:45:43.559 [Pipeline] // node 00:45:43.564 [Pipeline] End of Pipeline 00:45:43.606 Finished: SUCCESS